All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications through 2023 (with some from early 2024) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014 |
Alexander C. Schütz; Dirk Kerzel; David Souto Saccadic adaptation induced by a perceptual task Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–19, 2014. @article{Schuetz2014a, The human motor system and muscles are subject to fluctuations in the short and long term. Motor adaptation is classically thought of as a low-level process that compensates for the error between predicted and executed movements in order to maintain movement accuracy. Contrary to a low-level account, accurate movements might be only a means to support high-level behavioral and perceptual goals. To isolate the influence of high-level goals in adaptation of saccadic eye movements, we manipulated perceptual task requirements in the absence of low-level errors. Observers had to discriminate one character within a peripheral array of characters. Between trials, the location of this character within the array was changed. This manipulation led to an immediate strategic change and a slower, gradual adaptation of saccade amplitude and direction. These changes had a similar magnitude to classical saccade adaptation and transferred at least partially to reactive saccades without a perceptual task. These results suggest that a perceptual task can modify oculomotor commands by generating a top-down error signal in saccade maps just like a bottom-up visual position error. Hence saccade adaptation not only maintains saccadic targeting accuracy, but also optimizes gaze behavior for the behavioral goal, showing that perception shapes even low-level oculomotor mechanisms. |
D. Samuel Schwarzkopf; Elaine J. Anderson; Benjamin de Haas; Sarah J. White; Geraint Rees Larger extrastriate population receptive fields in autism spectrum disorders Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2713–2724, 2014. @article{Schwarzkopf2014, Previous behavioral research suggests enhanced local visual processing in individuals with autism spectrum disorders (ASDs). Here we used functional MRI and population receptive field (pRF) analysis to test whether the response selectivity of human visual cortex is atypical in individuals with high-functioning ASDs compared with neurotypical, demographically matched controls. For each voxel, we fitted a pRF model to fMRI signals measured while participants viewed flickering bar stimuli traversing the visual field. In most extrastriate regions, perifoveal pRFs were larger in the ASD group than in controls. We observed no differences in V1 or V3A. Differences in the hemodynamic response function, eye movements, or increased measurement noise could not account for these results; individuals with ASDs showed stronger, more reliable responses to visual stimulation. Interestingly, pRF sizes also correlated with individual differences in autistic traits but there were no correlations with behavioral measures of visual processing. Our findings thus suggest that visual cortex in ASDs is not characterized by sharper spatial selectivity. Instead, we speculate that visual cortical function in ASDs may be characterized by extrastriate cortical hyperexcitability or differential attentional deployment. |
Caspar M. Schwiedrzik; Christian C. Ruff; Andreea Lazar; Frauke C. Leitner; Wolf Singer; Lucia Melloni Untangling perceptual memory: Hysteresis and adaptation map into separate cortical networks Journal Article In: Cerebral Cortex, vol. 24, no. 5, pp. 1152–1164, 2014. @article{Schwiedrzik2014, Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain "decide" what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: An attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although affecting our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: Hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function). |
Katrin Preckel; Karlijn Massar Imprinting effects on visual attention to faces and judgments of attractiveness Journal Article In: EvoS Journal, vol. 6, no. 2, pp. 1–16, 2014. @article{Preckel2014, Previous studies have shown that human mate-choice can be influenced by exposure to opposite-sex parent characteristics. In this study we examined whether there are sexual-imprinting effects of fathers on their daughter's partner-choice. To this end our participants were asked to bring a picture of their father to the laboratory, and next an eye-tracker was used to determine participants' gaze directions while they were judging male faces for attractiveness. Participants were single, female undergraduates (n = 50, M age = 22 |
Katrin H. Preller; Marcus Herdener; Leonhard Schilbach; Philipp Stampfli; Lea M. Hulka; Matthias Vonmoos; Nina Ingold; Kai Vogeley; Philippe N. Tobler; Erich Seifritz; Boris B. Quednow Functional changes of the reward system underlie blunted response to social gaze in cocaine users Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 7, pp. 2842–2847, 2014. @article{Preller2014, Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants' gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. 
In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. |
Elsie Premereur; Wim Vanduffel; Peter Janssen The effect of FEF microstimulation on the responses of neurons in the lateral intraparietal area Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 8, pp. 1672–1684, 2014. @article{Premereur2014, The macaque FEFs and the lateral intraparietal area (LIP) are high-level cortical areas involved in both spatial attention and oculomotor behavior. Stimulating FEF at a level below the threshold for evoking saccades increases fMRI activity and gamma power in area LIP, but the precise effect exerted by the FEF on LIP neurons is unknown. In our study, we recorded LIP single-unit activity during a visually guided saccade task with a peripherally presented go signal during microstimulation of FEF. We found that FEF microstimulation increased the LIP spike rate immediately after the highly salient go signal inside the LIP receptive field when both target and go signal were presented inside the receptive field, and no other possible go cues were present on the screen. The effect of FEF microstimulation on the LIP response was positive until at least 800 msec after microstimulation had ceased, but reversed for longer trial durations. Therefore, FEF microstimulation can modulate the LIP spike rate only when attention is selectively directed toward the stimulated location. These results provide the first direct evidence for LIP spike rate modulations caused by FEF microstimulation, thus showing that FEF activity can be the source of top-down control of area LIP. |
Nicholas S. C. Price; J. Blum Motion perception correlates with volitional but not reflexive eye movements Journal Article In: Neuroscience, vol. 277, pp. 435–445, 2014. @article{Price2014, Visually-driven actions and perception are traditionally ascribed to the dorsal and ventral visual streams of the cortical processing hierarchy. However, motion perception and the control of tracking eye movements both depend on sensory motion analysis by neurons in the dorsal stream, suggesting that the same sensory circuits may underlie both action and perception. Previous studies have suggested that multiple sensory modules may be responsible for the perception of low- and high-level motion, or the detection versus identification of motion direction. However, it remains unclear whether the sensory processing systems that contribute to direction perception and the control of eye movements have the same neuronal constraints. To address this, we examined inter-individual variability across 36 observers, using two tasks that simultaneously assessed the precision of eye movements and direction perception: in the smooth pursuit task, observers volitionally tracked a small moving target and reported its direction; in the ocular following task, observers reflexively tracked a large moving stimulus and reported its direction. We determined perceptual-oculomotor correlations across observers, defined as the correlation between each observer's mean perceptual precision and mean oculomotor precision. Across observers, we found that: (i) mean perceptual precision was correlated between the two tasks; (ii) mean oculomotor precision was correlated between the tasks, and (iii) oculomotor and perceptual precision were correlated for volitional smooth pursuit, but not reflexive ocular following. Collectively, these results demonstrate that sensory circuits with common neuronal constraints subserve motion perception and volitional, but not reflexive eye movements. |
Heinz-Werner Priess; Nils Heise; Florian Fischmeister; Sabine Born; Herbert Bauer; Ulrich Ansorge Attentional capture and inhibition of saccades after irrelevant and relevant cues Journal Article In: Journal of Ophthalmology, pp. 1–12, 2014. @article{Priess2014, Attentional capture is usually stronger for task-relevant than irrelevant stimuli, whereas irrelevant stimuli can trigger equal or even stronger amounts of inhibition than relevant stimuli. Capture and inhibition, however, are typically assessed in separate trials, leaving it open whether or not inhibition of irrelevant stimuli is a consequence of preceding attentional capture by the same stimuli or whether inhibition is the only response to these stimuli. Here, we tested the relationship between capture and inhibition in a setup allowing for estimates of capture and inhibition based on the very same trials. We recorded saccadic inhibition after relevant and irrelevant stimuli. At the same time, we recorded the N2pc, an event-related potential reflecting initial capture of attention. We found attentional capture not only for relevant but, importantly, also for irrelevant stimuli, although the N2pc was stronger for relevant than irrelevant stimuli. In addition, inhibition of saccades was the same for relevant and irrelevant stimuli. We conclude with a discussion of the mechanisms that are responsible for these effects. |
Claudio M. Privitera; Thom Carney; Stanley A. Klein; Mario Aguilar Analysis of microsaccades and pupil dilation reveals a common decisional origin during visual search Journal Article In: Vision Research, vol. 95, pp. 43–50, 2014. @article{Privitera2014, During free viewing visual search, observers often refixate the same locations several times before and after target detection is reported with a button press. We analyzed the rate of microsaccades in the sequence of refixations made during visual search and found two important components. One related to the visual content of the region being fixated; fixations on targets generate more microsaccades, and more microsaccades are generated for those targets that are more difficult to disambiguate. The other emphasizes non-visual decisional processes; fixations containing the button press generate more microsaccades than those made on the same target but without the button press. Pupil dilation during the same refixations reveals a similar modulation. We inferred that generic sympathetic arousal mechanisms are part of the articulated complex of perceptual processes governing fixational eye movements. |
Liina Pylkkänen; Douglas K. Bemis; Estibaliz Blanco Elorrieta Building phrases in language production: An MEG study of simple composition Journal Article In: Cognition, vol. 133, no. 2, pp. 371–384, 2014. @article{Pylkkaenen2014, Although research on language production has developed detailed maps of the brain basis of single word production in both time and space, little is known about the spatiotemporal dynamics of the processes that combine individual words into larger representations during production. Studying composition in production is challenging due to difficulties both in controlling produced utterances and in measuring the associated brain responses. Here, we circumvent both problems using a minimal composition paradigm combined with the high temporal resolution of magnetoencephalography (MEG). With MEG, we measured the planning stages of simple adjective-noun phrases ('red tree'), matched list controls ('red, blue'), and individual nouns ('tree') and adjectives ('red'), with results indicating combinatorial processing in the ventro-medial prefrontal cortex (vmPFC) and left anterior temporal lobe (LATL), two regions previously implicated for the comprehension of similar phrases. These effects began relatively quickly (~180 ms) after the presentation of a production prompt, suggesting that combination commences with initial lexical access. Further, while in comprehension, vmPFC effects have followed LATL effects, in this production paradigm vmPFC effects occurred mostly in parallel with LATL effects, suggesting that a late process in comprehension is an early process in production. Thus, our results provide a novel neural bridge between psycholinguistic models of comprehension and production that posit functionally similar combinatorial mechanisms operating in reversed order. |
Carolyn Quam; Daniel Swingley Processing of lexical stress cues by young children Journal Article In: Journal of Experimental Child Psychology, vol. 123, no. 1, pp. 73–89, 2014. @article{Quam2014, Although infants learn an impressive amount about their native-language phonological system by the end of the first year of life, after the first year children still have much to learn about how acoustic dimensions cue linguistic categories in fluent speech. The current study investigated what children have learned about how the acoustic dimension of pitch indicates the location of the stressed syllable in familiar words. Preschoolers (2.5- to 5-year-olds) and adults were tested on their ability to use lexical-stress cues to identify familiar words. Both age groups saw pictures of a bunny and a banana and heard versions of "bunny" and "banana" in which stress either was indicated normally with convergent cues (pitch, duration, amplitude, and vowel quality) or was manipulated such that only pitch differentiated the words' initial syllables. Adults (n=48) used both the convergent cues and the isolated pitch cue to identify the target words as they unfolded. Children (n=206) used the convergent stress cues but not pitch alone in identifying words. We discuss potential reasons for children's difficulty in exploiting isolated pitch cues to stress despite children's early sensitivity to pitch in language. These findings contribute to a view in which phonological development progresses toward the adult state well past infancy. |
A. P. Raghuraman; Camillo Padoa-Schioppa Integration of multiple determinants in the neuronal computation of economic values Journal Article In: Journal of Neuroscience, vol. 34, no. 35, pp. 11583–11603, 2014. @article{Raghuraman2014, Economic goods may vary on multiple dimensions (determinants). A central conjecture in decision neuroscience is that choices between goods are made by comparing subjective values computed through the integration of all relevant determinants. Previous work identified three groups of neurons in the orbitofrontal cortex (OFC) of monkeys engaged in economic choices: (1) offer value cells, which encode the value of individual offers; (2) chosen value cells, which encode the value of the chosen good; and (3) chosen juice cells, which encode the identity of the chosen good. In principle, these populations could be sufficient to generate a decision. Critically, previous work did not assess whether offer value cells (the putative input to the decision) indeed encode subjective values as opposed to physical properties of the goods, and/or whether offer value cells integrate multiple determinants. To address these issues, we recorded from the OFC while monkeys chose between risky outcomes. Confirming previous observations, three populations of neurons encoded the value of individual offers, the value of the chosen option, and the value-independent choice outcome. The activity of both offer value cells and chosen value cells encoded values defined by the integration of juice quantity and probability. Furthermore, both populations reflected the subjective risk attitude of the animals. We also found additional groups of neurons encoding the risk associated with a particular option, the risky nature of the chosen option, and whether the trial outcome was positive or negative. These results provide substantial support for the conjecture described above and for the involvement of OFC in good-based decisions. |
Anis Rahman; Denis Pellerin; Dominique Houzet Influence of number, location and size of faces on gaze in video Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–11, 2014. @article{Rahman2014, Many studies have reported the preference for faces and influence of faces on gaze, most of them in static images and a few in videos. In this paper, we study the influence of faces in complex free-viewing videos, with respect to the effects of number, location and size of the faces. This knowledge could be used to enrich a face pathway in a visual saliency model. We used eye fixation data from an eye movement experiment, hand-labeled all the faces in the videos watched, and compared the labeled face regions against the eye fixations. We observed that fixations made are in proximity to, or inside the face regions. We found that 50% of the fixations landed directly on face regions that occupy less than 10% of the entire visual scene. Moreover, the fixation duration on videos with face is longer than without face, and longer than fixation duration on static images with faces. Finally, we analyzed the three influencing factors (Eccentricity, Area, Closeness) with linear regression models. For one face, the E+A combined model is slightly better than the E model and better than the A model. For two faces, the three variables (E, A, C) are tightly coupled and the E+A+C model had the highest score. |
Brandon C. W. Ralph; Paul Seli; Vivian O. Y. Cheng; Grayden J. F. Solman; Daniel Smilek Running the figure to the ground: Figure-ground segmentation during visual search Journal Article In: Vision Research, vol. 97, pp. 65–73, 2014. @article{Ralph2014, We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. |
Gary E. Raney; Spencer J. Campbell; Joanna C. Bovee Using eye movements to evaluate the cognitive processes involved in text comprehension Journal Article In: Journal of Visualized Experiments, no. 83, pp. 1–7, 2014. @article{Raney2014, The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli. |
James Rankin; Andrew Isaac Meso; Guillaume S. Masson; O. Faugeras; Pierre Kornprobst Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration Journal Article In: Journal of Computational Neuroscience, vol. 36, no. 2, pp. 193–213, 2014. @article{Rankin2014, Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model was then compared to human perception investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. In order to do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes. The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role. |
Anne K. Rau; Korbinian Moeller; Karin Landerl The transition from sublexical to lexical processing in a consistent orthography: An eye-tracking study Journal Article In: Scientific Studies of Reading, vol. 18, no. 3, pp. 224–233, 2014. @article{Rau2014, We studied the transition in predominant reading strategy from serial sublexical processing to more parallel lexical processing as a function of word familiarity in German children of Grades 2, 3, 4, and adults. High-frequency words, low-frequency words, and nonwords of differing length were embedded in sentences and presented in an eye-tracking paradigm. The size of the word length effect was used as an indicator of serial sublexical decoding. When controlling for the generally higher processing times in younger readers, the effect of length over reading development was not direct but modulated by familiarity: Length effects were comparable between items of differing familiarity for Grade 2, whereas from Grade 3, length effects increased with decreasing familiarity. These findings suggest that Grade 2 children apply serial sublexical decoding as a default reading strategy to most items, whereas reading by direct lexical access is increasingly dominant in more experienced readers. |
Keith Rayner The gaze-contingent moving window in reading: Development and review Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 242–258, 2014. @article{Rayner2014, The development of the gaze-contingent moving window paradigm (McConkie & Rayner, 1975, 1976) is discussed and the results of the earliest research are reviewed. The original work suggested that the region from which readers can obtain useful information during an eye fixation in reading, or the perceptual span, was asymmetric around the fixation point, and extended from 3–4 letter spaces to the left of fixation to about 14–15 letter spaces to the right of fixation. Subsequent research which substantiated these findings is discussed. Then more recent research using the moving window paradigm to investigate the following topics is discussed: (1) effects of reading speed, (2) effects of reading skill, (3) effects of the writing system, (4) effects due to age, (5) effects related to deafness, and (6) effects related to schizophrenia. Finally, some extensions of gaze-contingent paradigms to areas other than reading are discussed. |
Keith Rayner; Elizabeth R. Schotter Semantic preview benefit in reading English: The effect of initial letter capitalization Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1617–1628, 2014. @article{Rayner2014a, A major controversy in reading research is whether semantic information is obtained from the word to the right of the currently fixated word (word n + 1). Although most evidence has been negative in English, semantic preview benefit has been observed for readers of Chinese and German. In the present experiment, we investigated whether the discrepancy between English and German may be attributable to a difference in visual properties of the orthography: the first letter of a noun is always capitalized in German, but is only occasionally capitalized in English. This visually salient property may draw greater attention to the word during parafoveal preview and thus increase preview benefit generally (and lead to a greater opportunity for semantic preview benefit). We used English target nouns that can either be capitalized (e.g., We went to the critically acclaimed Ballet of Paris while on vacation.) or not (e.g., We went to the critically acclaimed ballet that was showing in Paris.) and manipulated the capitalization of the preview accordingly, to determine whether capitalization modulates preview benefit in English. The gaze-contingent boundary paradigm was used with identical, semantically related, and unrelated previews. Consistent with our hypothesis, we found numerically larger preview benefits when the preview/target was capitalized than when it was lowercase. Crucially, semantic preview benefit was not observed when the preview/target word was not capitalized, but was observed when the preview/target word was capitalized. |
Keith Rayner; Jinmian Yang; Susanne Schuett; Timothy J. Slattery The effect of foveal and parafoveal masks on the eye movements of older and younger readers Journal Article In: Psychology and Aging, vol. 29, no. 2, pp. 205–212, 2014. @article{Rayner2014b, In the present study, we examined foveal and parafoveal processing in older compared with younger readers by using gaze-contingent paradigms with 4 conditions. Older and younger readers read sentences in which a) the text was presented normally, b) the foveal word was masked as soon as it was fixated, c) all of the words to the left of the fixated word were masked, or d) all of the words to the right of the fixated word were masked. Although older and younger readers both found reading when the fixated word was masked quite difficult, the foveal mask increased sentence reading time more than 3-fold (3.4) for the older readers (in comparison with the control condition in which the sentence was presented normally) compared with the younger readers, who took 1.3 times longer to read sentences in the foveal mask condition (in comparison with the control condition). The left and right parafoveal masks did not disrupt reading as severely as the foveal mask, though the right mask was more disruptive than the left mask. Also, there was some indication that the younger readers found the right mask condition relatively more disruptive than the left mask condition. |
Scott A. Reed; Paul Dassonville Adaptation to leftward-shifting prisms enhances local processing in healthy individuals Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 418–427, 2014. @article{Reed2014, In healthy individuals, adaptation to left-shifting prisms has been shown to simulate the symptoms of hemispatial neglect, including a reduction in global processing that approximates the local bias observed in neglect patients. The current study tested whether leftward prism adaptation can more specifically enhance local processing abilities. In three experiments, the impact of local and global processing was assessed through tasks that measure susceptibility to illusions that are known to be driven by local or global contextual effects. Susceptibility to the rod-and-frame illusion - an illusion driven by local or global effects depending on frame size - was measured before and after adaptation to left- and right-shifting prisms. A significant increase in rod-and-frame susceptibility was found for the left-shifting prism group, suggesting that adaptation caused an increase in local processing effects. The results of a second experiment confirmed that leftward prism adaptation enhances local processing, as assessed with susceptibility to the simultaneous-tilt illusion. A final experiment employed a more specific measure of the global effect typically associated with the rod-and-frame illusion, and found that although the global effect was somewhat diminished after leftward prism adaptation, the trend failed to reach significance (p=.078). Rightward prism adaptation had no significant effects on performance in any of the experiments. Combined, these findings indicate that leftward prism adaptation in healthy individuals can simulate the local processing bias of neglect patients primarily through an increased sensitivity to local visual cues, and confirm that prism adaptation not only modulates lateral shifts of attention, but also prompts shifts from one level of processing to another. |
Hsin-Hui Lin; Shu-Fei Yang An eye movement study of attribute framing in online shopping Journal Article In: Journal of Marketing Analytics, vol. 2, no. 2, pp. 72–80, 2014. @article{Lin2014c, This study uses an eye-tracking method to explore the framing effect on observed eye movements and purchase intention in online shopping. The results show that negative framing induces more active eye movements, and that functional and non-functional attributes attract more eye movements, with higher intensity. Moreover, the scanpaths over the areas of interest reveal a consistent pattern. These findings have practical implications for e-sellers seeking to improve communication with customers. |
John J. H. Lin; Sunny S. J. Lin Tracking eye movements when solving geometry problems with handwriting devices Journal Article In: Journal of Eye Movement Research, vol. 7, no. 1, pp. 1–15, 2014. @article{Lin2014a, The present study investigated the following issues: (1) whether differences are evident in the eye movement measures of successful and unsuccessful problem-solvers; (2) what is the relationship between perceived difficulty and eye movement measures; and (3) whether eye movements in various AOIs differ when solving problems. Sixty-three 11th grade students solved five geometry problems about the properties of similar triangles. A digital drawing tablet and pressure-sensitive pen were used to record the responses. The results indicated that unsuccessful solvers tended to have more fixation counts, run counts, and longer dwell time on the problem area, whereas successful solvers focused more on the calculation area. In addition, fixation counts, dwell time, and run counts in the diagram area were positively correlated with the perceived difficulty, suggesting that understanding similar triangles may require translation or mental rotation. We argue that three eye movement measures (i.e., fixation counts, dwell time, and run counts) are appropriate for use in examining problem solving given that they differentiate successful from unsuccessful solvers and correlate with perceived difficulty. Furthermore, the eye-tracking technique provides objective measures of students' cognitive load for instructional designers. |
John J. H. Lin; Sunny S. J. Lin Cognitive load for configuration comprehension in computer-supported geometry problem solving: An eye movement perspective Journal Article In: International Journal of Science and Mathematics Education, vol. 12, no. 3, pp. 605–627, 2014. @article{Lin2014b, The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations involving the number of informational elements, the number of element interactivities and the level of mental operations were assumed to account for the increasing difficulty. A sample of 311 9th grade students solved five geometry problems that required knowledge of similar triangles in a computer-supported environment. In the second experiment, 63 participants solved the same problems and eye movements were recorded. The results indicated that (1) the five problems differed in pass rate and in self-reported cognitive load; (2) because the successful solvers were very swift in pattern recognition and visual integration, their fixations did not clearly reveal valuable information; (3) more attention and more time (shown by the heat maps, dwell time and fixation counts) were given to read the more difficult configurations than to the intermediate or easier configurations; and (4) in addition to number of elements and element interactivities, the level of mental operations accounts for the major cognitive load sources of configuration comprehension. The results suggest implications for the design of geometry diagrams in secondary school mathematics textbooks. |
Angelika Lingnau; Thorsten Albrecht; Jens Schwarzbach; Dirk Vorberg Visual search without central vision - no single pseudofovea location is best Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–14, 2014. @article{Lingnau2014, We typically fixate targets such that they are projected onto the fovea for best spatial resolution. Macular degeneration patients often develop fixation strategies such that targets are projected to an intact eccentric part of the retina, called pseudofovea. A longstanding debate concerns which pseudofovea location is optimal for non-foveal vision. We examined how pseudofovea position and eccentricity affect performance in visual search, when vision is restricted to an off-foveal retinal region by a gaze-contingent display that dynamically blurs the stimulus except within a small viewing window (forced field location). Trained normally sighted participants were more accurate when the forced field location was congruent with the required scan path direction; this contradicts the view that a single pseudofovea location is generally best. Rather, performance depends on the congruence between pseudofovea location and scan path direction. |
Christina Liossi; Daniel E. Schoth; Hayward J. Godwin; Simon P. Liversedge Using eye movements to investigate selective attention in chronic daily headache Journal Article In: Pain, vol. 155, no. 3, pp. 503–510, 2014. @article{Liossi2014, Previous research has demonstrated that chronic pain is associated with biased processing of pain-related information. Most studies have examined this bias by measuring response latencies. The present study extended previous work by recording eye movement behaviour in individuals with chronic headache and in healthy controls while participants viewed a set of images (ie, facial expressions) from 4 emotion categories (pain, angry, happy, neutral). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze on the picture that was initially fixated, and the mean number of visits, and mean fixation duration per image category. The eye movement behaviour of the participants in the chronic headache group was characterised by a bias in initial shift of orienting to pain. There was no evidence of individuals with chronic headache visiting more often, or spending significantly more time viewing, pain images compared to other images. Both participant groups showed a significantly greater bias to maintain gaze longer on happy images, relative to pain, angry, and neutral images. Results are consistent with a pain-related bias that operates in the orienting of attention on pain-related stimuli, and suggest that chronic pain participants' attentional biases for pain-related information are evident even when other emotional stimuli are present. Pain-related information-processing biases appear to be a robust feature of chronic pain and may have an important role in the maintenance of the disorder. |
Alexandra List; Lucica Iordanescu; Marcia Grabowecky; Satoru Suzuki Haptic guidance of overt visual attention Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2221–2228, 2014. @article{List2014, Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention. |
Pingping Liu; Weijun Li; Buxin Han; Xingshan Li Effects of anomalous characters and small stroke omissions on eye movements during the reading of Chinese sentences Journal Article In: Ergonomics, vol. 57, no. 11, pp. 1659–1669, 2014. @article{Liu2014, We investigated the influence of typographical errors (typos) on eye movements and word recognition in Chinese reading. Participants' eye movements were tracked as they read sentences in which the target words were (1) presented normally, (2) presented with the initial stroke of the first characters removed (the omitted stroke condition), or (3) presented with the first characters replaced by anomalous characters (the anomalous character condition). The results indicated that anomalous characters caused longer fixation durations and shorter outgoing forward saccade lengths than the correct words. This finding is consistent with the prediction of the theory of the processing-based strategy. Additionally, anomalous characters strongly disrupted lexical processing and whole sentence comprehension, but small stroke omissions did not. Implications of the effect of processing difficulty on forward saccade targeting for models of eye movement control during Chinese reading are discussed. |
Pingping Liu; Xingshan Li Inserting spaces before and after words affects word processing differently in Chinese: Evidence from eye movements Journal Article In: British Journal of Psychology, vol. 105, no. 1, pp. 57–68, 2014. @article{Liu2014a, Unlike in English, there are no spaces between printed words in Chinese. In this study, we explored how inserting a space before or after a word affects the processing of that word in Chinese reading. Native Chinese readers' eye movements were monitored as they read sentences with different presentation conditions. The results show that inserting a space after a word facilitates its processing, but inserting a space before a word does not show this effect and in some cases inhibits the processing of that word. Our results are consistent with the prediction of a word segmentation and recognition model in Chinese reading (Li et al., 2009, Cognit. Psychol., 58, 525). Additionally, we found that a space guides the initial landing position on the word: the initial landing position was farther away from the space that was inserted into the text, whether the space was before or after a word. |
Tzu Chien Liu; Melissa Hui Mei Fan; Fred Paas In: Computers and Education, vol. 70, pp. 9–20, 2014. @article{Liu2014b, Recent research has shown that students involved in computer-based second language learning prefer a digital dictionary in which a word can be looked up by clicking on it with a mouse (i.e., click-on dictionary) to a digital dictionary in which a word can be looked up by typing it on a keyboard (i.e., key-in dictionary). This study investigated whether digital dictionary format also differentially affects students' incidental acquisition of spelling knowledge and cognitive load during second language learning. A comparison between a click-on dictionary condition, a key-in dictionary condition, and a non-dictionary control condition for 45 Taiwanese students learning English as a foreign language revealed that learners who used a key-in dictionary invested more time in dictionary consultation than learners who used a click-on dictionary. However, on a subsequent unexpected spelling test, the key-in group spent less time and performed better than the click-on group. The theoretical and practical implications of the results are discussed. |
Simon P. Liversedge; Chuanli Zang; Manman Zhang; Xuejun Bai; Guoli Yan; Denis Drieghe The effect of visual complexity and word frequency on eye movements during Chinese reading Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 441–457, 2014. @article{Liversedge2014, Eye movements of native Chinese readers were monitored when they read sentences containing single-character target words orthogonally manipulated for frequency and visual complexity (number of strokes). Both factors yielded strong main effects on skipping probability but no interaction, with readers skipping visually simple and high frequency words more often. However, an interaction between frequency and complexity was observed on the fixation times on the target words, with longer fixations for the low frequency, visually complex words. The results demonstrate that visual complexity and frequency have independent influences on saccadic targeting behaviour during Chinese reading but jointly influence fixation durations, indicating that these two factors impact saccade targeting and fixation durations in different ways. |
Shih-Yu Lo; Alex O. Holcombe How do we select multiple features? Transient costs for selecting two colors rather than one, persistent costs for color-location conjunctions Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 304–321, 2014. @article{Lo2014, In a previous study Lo, Howard, & Holcombe (Vision Research 63:20-33, 2012), selecting two colors did not induce a performance cost, relative to selecting one color. For example, requiring possible report of both a green and a red target did not yield a worse performance than when both targets were green. Yet a cost of selecting multiple colors was observed when selection needed to be contingent on both color and location. When selecting a red target to the left and a green target to the right, superimposing a green distractor to the left and a red distractor to the right impeded performance. Possibly, participants cannot confine attention to a color at a particular location. As a result, distractors that share the target colors disrupt attentional selection of the targets. The attempt to select the targets must then be repeated, which increases the likelihood that the trial terminates when selection is not effective, even for long trials. Consistent with this, here we find a persistent cost of selecting two colors when the conjunction of color and location is needed, but the cost is confined to short exposure durations when the observer just has to monitor red and green stimuli without the need to use the location information. These results suggest that selecting two colors is time-consuming but effective, whereas selection of simultaneous conjunctions is never entirely successful. |
Anna A. Kosovicheva; Benjamin A. Wolfe; David Whitney Visual motion shifts saccade targets Journal Article In: Attention, Perception, and Psychophysics, vol. 76, no. 6, pp. 1778–1788, 2014. @article{Kosovicheva2014, Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects as they are constantly moving relative to the eye and head. Any delays in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting, without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target, by using static drifting Gabor patches with either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft aperture Gabor patch resulted in an illusory position shift, whereas the hard aperture stimulus maintained the motion signals but resulted in a smaller illusory position shift. Thus, motion energy and target location were equated, but a position shift was generated in only one condition. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with a soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and this remapped location guides saccadic localization. |
Christopher K. Kovach; Matthew J. Sutterer; Sara N. Rushia; Adrianna Teriakidis; Rick L. Jenison Two systems drive attention to rewards Journal Article In: Frontiers in Psychology, vol. 5, pp. 46, 2014. @article{Kovach2014, How options are framed can dramatically influence choice preference. While salience of information plays a central role in this effect, precisely how it is mediated by attentional processes remains unknown. Current models assume a simple relationship between attention and choice, according to which preference should be uniformly biased towards the attended item over the whole time-course of a decision between similarly valued items. To test this prediction we considered how framing alters the orienting of gaze during a simple choice between two options, using eye movements as a sensitive online measure of attention. In one condition participants selected the less preferred item to discard and in the other, the more preferred item to keep. We found that gaze gravitates towards the item ultimately selected, but did not observe the effect to be uniform over time. Instead, we found evidence for distinct early and late processes that guide attention according to preference in the first case and task demands in the second. We conclude that multiple time-dependent processes govern attention during choice, and that these may contribute to framing effects in different ways. |
Michael J. Koval; R. Matthew Hutchison; Stephen G. Lomber; Stefan Everling Effects of unilateral deactivations of dorsolateral prefrontal cortex and anterior cingulate cortex on saccadic eye movements Journal Article In: Journal of Neurophysiology, vol. 111, no. 4, pp. 787–803, 2014. @article{Koval2014, The dorsolateral prefrontal cortex (dlPFC) and anterior cingulate cortex (ACC) have both been implicated in the cognitive control of saccadic eye movements by single neuron recording studies in nonhuman primates and functional imaging studies in humans, but their relative roles remain unclear. Here, we reversibly deactivated either dlPFC or ACC subregions in macaque monkeys while the animals performed randomly interleaved pro- and antisaccades. In addition, we explored the whole-brain functional connectivity of these two regions by applying a seed-based resting-state functional MRI analysis in a separate cohort of monkeys. We found that unilateral dlPFC deactivation had stronger behavioral effects on saccades than unilateral ACC deactivation, and that the dlPFC displayed stronger functional connectivity with frontoparietal areas than the ACC. We suggest that the dlPFC plays a more prominent role in the preparation of pro- and antisaccades than the ACC. |
Eileen Kowler; Cordelia D. Aitkin; Nicholas M. Ross; Elio M. Santos; Min Zhao Davida Teller Award Lecture 2013: The importance of prediction and anticipation in the control of smooth pursuit eye movements Journal Article In: Journal of Vision, vol. 14, no. 5, pp. 1–16, 2014. @article{Kowler2014, The ability of smooth pursuit eye movements to anticipate the future motion of targets has been known since the pioneering work of Dodge, Travis, and Fox (1930) and Westheimer (1954). This article reviews aspects of anticipatory smooth eye movements, focusing on the roles of the different internal or external cues that initiate anticipatory pursuit. We present new results showing that the anticipatory smooth eye movements evoked by different cues differ substantially, even when the cues are equivalent in the information conveyed about the direction of future target motion. Cues that convey an easily interpretable visualization of the motion path produce faster anticipatory smooth eye movements than the other cues tested, including symbols associated arbitrarily with the path, and the same target motion tested repeatedly over a block of trials. The differences among the cues may be understood within a common predictive framework in which the cues differ in the level of subjective certainty they provide about the future path. Pursuit may be driven by a combined signal in which immediate sensory motion, and the predictions about future motion generated by sets of cues, are weighted according to their respective levels of certainty. Anticipatory smooth eye movements, an overt indicator of expectations and predictions, may not be operating in isolation, but may be part of a global process in which the brain analyzes available cues, formulates predictions, and uses them to control perceptual, motor, and cognitive processes. |
Jens Kremkow; Jianzhong Jin; Stanley J. Komban; Yushi Wang; Reza Lashgari; Xiaobing Li; Michael Jansen; Qasim Zaidi; Jose-Manuel Alonso Neuronal nonlinearity explains greater visual spatial resolution for darks than lights Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 8, pp. 3170–3175, 2014. @article{Kremkow2014, Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes. |
André Krügel; Ralf Engbert A model of saccadic landing positions in reading under the influence of sensory noise Journal Article In: Visual Cognition, vol. 22, no. 3, pp. 334–353, 2014. @article{Kruegel2014, During reading, saccadic eye movements are produced to move the high acuity foveal region of the eye to words of interest for efficient word processing. Distributions of saccadic landing positions peak close to a word's centre but are relatively broad compared to simple oculomotor tasks. Moreover, landing-position distributions are modulated both by distance of the launch site and by saccade type (e.g., one-step saccade, word skipping, refixation). Here we present a mathematical model for the computation of a saccade intended for a given target word. Two fundamental assumptions are related to (1) the sensory computation of the word centre from inter-word spaces and (2) the integration of sensory information and a priori knowledge using Bayesian estimation. Our model was developed for data from a large corpus of eye movements from normal reading. We demonstrate that the model is able to account simultaneously for a systematic shift of saccadic mean landing position with increasing launch-site distance and for qualitative differences between one-step saccades (i.e., from a given word to the next word) and word-skipping saccades. |
Wouter Kruijne; Stefan Van der Stigchel; Martijn Meeter A model of curved saccade trajectories: Spike rate adaptation in the brainstem as the cause of deviation away Journal Article In: Brain and Cognition, vol. 85, no. 1, pp. 259–270, 2014. @article{Kruijne2014, The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. |
James H. Kryklywy; Derek G. V. Mitchell Emotion modulates allocentric but not egocentric stimulus localization: implications for dual visual systems perspectives Journal Article In: Experimental Brain Research, vol. 232, no. 12, pp. 3719–3726, 2014. @article{Kryklywy2014, Considerable evidence suggests that emotional cues influence processing prioritization and neural representations of stimuli. Specifically, within the visual domain, emotion is known to impact ventral stream processes and ventral stream-mediated behaviours; it remains unclear, however, the extent to which emotion impacts dorsal stream processes. In the present study, participants localized a visual target stimulus embedded within a background array utilizing allocentric localization (requiring an object-centred representation of visual space to perform an action) and egocentric localization (requiring purely target-directed actions), which are thought to differentially rely on the ventral versus dorsal visual stream, respectively. Simultaneously, a task-irrelevant negative, positive or neutral sound was presented to produce an emotional context. In line with predictions, we found that during allocentric localization, response accuracy was enhanced in the context of negative compared to either neutral or positive sounds. In contrast, no significant effects of emotion were identified during egocentric localization. These results raise the possibility that negative emotional auditory contexts enhance ventral stream, but not dorsal stream, processing in the visual domain. Furthermore, this study highlights the complexity of emotion-cognition interactions, indicating how emotion can have a differential impact on almost identical overt behaviours that may be governed by distinct neurocognitive systems. |
Anuenue Kukona; Gerry T. M. Altmann; Yuki Kamide Knowing what, where, and when: Event comprehension in language processing Journal Article In: Cognition, vol. 133, no. 1, pp. 25–31, 2014. @article{Kukona2014, We investigated the retrieval of location information, and the deployment of attention to these locations, following (described) event-related location changes. In two visual world experiments, listeners viewed arrays with containers like a bowl, jar, pan, and jug, while hearing sentences like "The boy will pour the sweetcorn from the bowl into the jar, and he will pour the gravy from the pan into the jug. And then, he will taste the sweetcorn". At the discourse-final "sweetcorn", listeners fixated context-relevant "Target" containers most (jar). Crucially, we also observed two forms of competition: listeners fixated containers that were not directly referred to but associated with "sweetcorn" (bowl), and containers that played the same role as Targets (goals of moving events; jug), more than distractors (pan). These results suggest that event-related location changes are encoded across representations that compete for comprehenders' attention, such that listeners retrieve, and fixate, locations that are not referred to in the unfolding language, but related to them via object or role information. |
Delphine Lévy-Bencheton; Denis Pélisson; Muriel T. N. Panouillères; Christian Urquizar; Caroline Tilikete; Laure Pisella Adaptation of scanning saccades co-occurs in different coordinate systems Journal Article In: Journal of Neurophysiology, vol. 111, no. 12, pp. 2505–2515, 2014. @article{LevyBencheton2014, Plastic changes of saccades (i.e., following saccadic adaptation) do not transfer between oppositely directed saccades, except when multiple directions are trained simultaneously, suggesting a saccadic planning in retinotopic coordinates. Interestingly, a recent study in human healthy subjects revealed that after an adaptive increase of rightward-scanning saccades, both leftward and rightward double-step, memory-guided saccades, triggered toward the adapted endpoint, were modified, revealing that target location was coded in spatial coordinates (Zimmermann et al. 2011). However, as the computer screen provided a visual frame, one alternative hypothesis could be a coding in allocentric coordinates. Here, we questioned whether adaptive modifications of saccadic planning occur in multiple coordinate systems. We reproduced the paradigm of Zimmermann et al. (2011) using target light-emitting diodes in the dark, with and without a visual frame, and tested different saccades before and after adaptation. With double-step, memory-guided saccades, we reproduced the transfer of adaptation to leftward saccades with the visual frame but not without, suggesting that the coordinate system used for saccade planning, when the frame is visible, is allocentric rather than spatiotopic. With single-step, memory-guided saccades, adaptation transferred to leftward saccades, both with and without the visual frame, revealing a target localization in a coordinate system that is neither retinotopic nor allocentric. Finally, with single-step, visually guided saccades, the classical, unidirectional pattern of amplitude change was reproduced, revealing retinotopic coordinate coding. 
These experiments indicate that the same procedure of adaptation modifies saccadic planning in multiple coordinate systems in parallel, each of them revealed by the use of different saccade tasks in postadaptation. |
Xingshan Li; Klinton Bicknell; Pingping Liu; Wei Wei; Keith Rayner Reading is fundamentally similar across disparate writing systems: A systematic characterization of how words and characters influence eye movements in Chinese reading Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 2, pp. 895–913, 2014. @article{Li2014, While much previous work on reading in languages with alphabetic scripts has suggested that reading is word-based, reading in Chinese has been argued to be less reliant on words. This is primarily because in the Chinese writing system words are not spatially segmented, and characters are themselves complex visual objects. Here, we present a systematic characterization of the effects of a wide range of word and character properties on eye movements in Chinese reading, using a set of mixed-effects regression models. The results reveal a rich pattern of effects of the properties of the current, previous, and next words on a range of reading measures, which is strikingly similar to the pattern of effects of word properties reported in spaced alphabetic languages. This finding provides evidence that reading shares a word-based core and may be fundamentally similar across languages with highly dissimilar scripts. We show that these findings are robust to the inclusion of character properties in the regression models and are equally reliable when dependent measures are defined in terms of characters rather than words, providing strong evidence that word properties have effects in Chinese reading above and beyond characters. This systematic characterization of the effects of word and character properties in Chinese advances our knowledge of the processes underlying reading and informs the future development of models of reading. More generally, however, this work suggests that differences in script may not alter the fundamental nature of reading. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Yung-Hui Lee Evaluating camouflage design using eye movement data Journal Article In: Applied Ergonomics, vol. 45, no. 3, pp. 714–723, 2014. @article{Lin2014d, This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns could significantly affect eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of the camouflage assessment. We hypothesized that the assessment could be made with regard to the differences in detectability and discriminability of the camouflage patterns. These differences could explain less efficient search behavior in eye movements. Overall, data obtained from eye movements can significantly enhance the interpretation of the effects of different camouflage designs. |
Chiuhsiang Joe Lin; Chi-Chan Chang; Bor-Shong Liu Developing and evaluating a target-background similarity metric for camouflage detection Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e87310, 2014. @article{Lin2014e, BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and that it could potentially serve as a camouflage assessment tool. METHODOLOGY: In this study, we sought to quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. |
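The Universal Image Quality Index referenced in the abstract above has a compact closed form combining correlation, mean luminance, and contrast similarity. A minimal grayscale sketch is shown below; the pixel lists and variable names are toy illustrations, not the study's implementation, which would compute the index over sliding image windows.

```python
# Minimal sketch of the Universal Image Quality Index (UIQI):
# Q = 4 * cov(x, y) * mean(x) * mean(y) / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2))
# x and y are flattened grayscale patches; all data below is hypothetical.

def uiqi(x, y):
    """UIQI between two equal-length pixel sequences (1.0 = identical)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return (4 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

target = [120, 130, 125, 140, 135, 128]      # toy "target" patch
background = [118, 133, 122, 143, 130, 131]  # toy "background" patch
print(round(uiqi(target, target), 6))   # identical patches give the maximum score
print(round(uiqi(target, background), 6))
```

On this reading, a higher target-background UIQI means the target blends in better, which is consistent with the abstract's framing of the index as a similarity-based camouflage metric.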
Hai Lin; Joshua D. Rizak; Yuan-ye Ma; Shang-chuan Yang; Lin Chen; Xin-tian Hu Face recognition increases during saccade preparation Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e93112, 2014. @article{Lin2014, Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition. |
Cai S. Longman; Aureliu Lavric; Cristian Munteanu; Stephen Monsell Attentional inertia and delayed orienting of spatial attention in task-switching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 4, pp. 1580–1602, 2014. @article{Longman2014, Among the potential, but neglected, sources of task-switch costs is the need to reallocate attention to different attributes or objects. Even theorists who recognize the importance of attentional resetting in task-switching sometimes think it too efficient to result in significant behavioral costs. We examined the dynamics of spatial attention in a task-cuing paradigm using eye-tracking. Digits appeared simultaneously at 3 locations. A cue preceded this display by a variable interval, instructing the performance of 1 of 3 classification tasks (odd-even, low-high, inner-outer) each consistently associated with a location, so that task preparation could be tracked via fixation of the task-relevant location. Task-switching led to a delay in selecting the relevant location and a tendency to misallocate attention; the previously relevant location attracted attention much more than the other irrelevant location on switch trials, indicating "inertia" in attentional parameters rather than mere distractibility. These effects predicted reaction time switch costs within and over participants. The switch-induced delay was not confined to trials with slow/late orienting, but characteristic of most switch trials. The attentional pull of the previously relevant location was substantially reduced, but not eliminated, by extending the preparation interval to more than 1 sec, suggesting that attentional inertia contributes to the "residual" switch cost. 
A control condition, using identical displays but only 1 task, showed that these effects could not be attributed to the (small and transient) delays or inertia observed when the required orientation changed between trials in the absence of a task change. |
Lester C. Loschky; Ryan V. Ringer; Aaron P. Johnson; Adam M. Larson; Mark B. Neider; Arthur F. Kramer Blur detection is unaffected by cognitive load Journal Article In: Visual Cognition, vol. 22, no. 3-4, pp. 522–547, 2014. @article{Loschky2014, Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. 
The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, blur detection in real-world scene images appears to be unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task. |
Matthew W. Lowder; Peter C. Gordon Effects of animacy and noun-phrase relatedness on the processing of complex sentences Journal Article In: Memory & Cognition, vol. 42, no. 5, pp. 794–805, 2014. @article{Lowder2014, Previous work has suggested that syntactically complex object-extracted relative clauses are easier to process when the head noun phrase (NP1) is inanimate and the embedded noun phrase (NP2) is animate, as compared with the reverse animacy configuration, with differences in processing difficulty beginning as early as NP2 (e.g., The article that the senator . . . vs. The senator that the article . . .). Two eye-tracking-while-reading experiments were conducted to better understand the source of this effect. Experiment 1 showed that having an inanimate NP1 facilitated processing even when NP2 was held constant. Experiment 2 manipulated both animacy of NP1 and the degree of semantic relatedness between the critical NPs. When NP1 and NP2 were paired arbitrarily, the early animacy effect emerged at NP2. When NP1 and NP2 were semantically related, this effect disappeared, with effects of NP1 animacy emerging in later processing stages for both the related and arbitrary conditions. The results indicate that differences in the animacy of NP1 influence early processing of complex sentences only when the critical NPs share no meaningful relationship. |
Steven J. Luck; Clara McClenon; Valerie M. Beck; Andrew Hollingworth; Carly J. Leonard; Britta Hahn; Benjamin M. Robinson; James M. Gold Hyperfocusing in schizophrenia: Evidence from interactions between working memory and eye movements Journal Article In: Journal of Abnormal Psychology, vol. 123, no. 4, pp. 783–795, 2014. @article{Luck2014, Recent research suggests that processing resources are focused more narrowly but more intensely in people with schizophrenia (PSZ) than in healthy control subjects (HCS), possibly reflecting local cortical circuit abnormalities. This hyperfocusing hypothesis leads to the counterintuitive prediction that, although PSZ cannot store as much information in working memory as HCS, the working memory representations that are present in PSZ may be more intense than those in HCS. To test this hypothesis, we used a task in which participants make a saccadic eye movement to a peripheral target and avoid a parafoveal nontarget while they are holding a color in working memory. Previous research with this task has shown that the parafoveal nontarget is more distracting when it matches the color being held in working memory. This effect should be enhanced in PSZ if their working memory representations are more intense. Consistent with this prediction, we found that the effect of a match between the distractor color and the memory color was larger in PSZ than in HCS. We also observed evidence that PSZ hyperfocused spatially on the region surrounding the fixation point. These results provide further evidence that some aspects of cognitive dysfunction in schizophrenia may be a result of a narrower and more intense focusing of processing resources. |
Casimir J. H. Ludwig; J. Rhys Davies; Miguel P. Eckstein Foveal analysis and peripheral selection during active visual sampling Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 2, pp. E291–E299, 2014. @article{Ludwig2014, Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. |
Arthur J. Lugtigheid; Laurie M. Wilcox; Robert S. Allison; Ian P. Howard Vergence eye movements are not essential for stereoscopic depth Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 281, pp. 1–7, 2014. @article{Lugtigheid2014, The brain receives disparate retinal input owing to the separation of the eyes, yet we usually perceive a single fused world. This is because of complex interactions between sensory and oculomotor processes that quickly act to reduce excessive retinal disparity. This implies a strong link between depth perception and fusion, but it is well established that stereoscopic depth percepts are also obtained from stimuli that produce double images. Surprisingly, the nature of depth percepts from such diplopic stimuli remains poorly understood. Specifically, despite long-standing debate it is unclear whether depth under diplopia is owing to the retinal disparity (directly), or whether the brain interprets signals from fusional vergence responses to large disparities (indirectly). Here, we addressed this question using stereoscopic afterimages, for which fusional vergence cannot provide retinal feedback about depth. We showed that observers could reliably recover depth sign and magnitude from diplopic afterimages. In addition, measuring vergence responses to large disparity stimuli revealed that the sign and magnitude of vergence responses are not systematically related to the target disparity, thus ruling out an indirect explanation of our results. Taken together, our research provides the first conclusive evidence that stereopsis is a direct process, even for diplopic targets. |
Katerina Lukasova; Jens Sommer; Mariana P. Nucci-Da-Silva; Gilson Vieira; Marius Blanke; Frank Bremmer; João R. Sato; Tilo Kircher; Edson Amaro Test-retest reliability of fMRI activation generated by different saccade tasks Journal Article In: Journal of Magnetic Resonance Imaging, vol. 40, no. 1, pp. 37–46, 2014. @article{Lukasova2014, PURPOSE: To assess the reproducibility of brain-activation and eye-movement patterns in a saccade paradigm when comparing subjects, tasks, and magnetic resonance (MR) systems. MATERIALS AND METHODS: Forty-five healthy adults at two different sites (n = 45) performed saccade tasks with varying levels of target predictability: predictable (PRED), position predictable (pPRED), time predictable (tPRED), and prosaccade (SAC). Eye-movement patterns were tested with a repeated-measures analysis of variance. Activation map reproducibility was estimated with the cluster overlap Jaccard index and signal variance coefficient of determination for within-subjects test-retest data, and for between-subjects data from the same and different sites. RESULTS: In all groups, latencies increased with decreasing target predictability: PRED < pPRED < tPRED < SAC (P < 0.001). Activation overlap was good to fair (>0.40) in all tasks in the within-subjects test-retest comparisons and poor (<0.40) in the tPRED for different subjects. The overlap of the different tasks for within-groups data was higher (0.40-0.68) than for the between-groups data (0.30-0.50). Activation consistency was 60-85% in the same subjects, 50-79% in different subjects, and 50-80% in different sites. In SAC, the activation found in the same and in different subjects was more consistent than in other tasks (50-80%). CONCLUSION: The predictive saccade tasks produced evidence for brain-activation and eye-movement reproducibility. |
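The cluster-overlap Jaccard index used in the abstract above is simply intersection over union of two thresholded activation maps. A minimal voxelwise sketch with hypothetical binary maps follows; the study's own cluster-level computation may differ in detail.

```python
# Minimal sketch of the Jaccard overlap index: |A ∩ B| / |A ∪ B|
# over two thresholded (binary) activation maps. All data below is toy data.

def jaccard_index(map_a, map_b):
    """Overlap of two equal-length binary activation maps (1 = active voxel)."""
    inter = sum(1 for a, b in zip(map_a, map_b) if a and b)
    union = sum(1 for a, b in zip(map_a, map_b) if a or b)
    return inter / union if union else 0.0

test_map = [1, 1, 1, 0, 0, 0]    # suprathreshold voxels, session 1 (toy)
retest_map = [0, 1, 1, 1, 0, 0]  # suprathreshold voxels, session 2 (toy)
print(jaccard_index(test_map, retest_map))  # 2 shared / 4 active overall = 0.5
```

Against this scale, the paper's ">0.40 = good to fair" criterion means that at least roughly 40% of all voxels active in either session were active in both.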
Steven G. Luke; Tim J. Smith; Joseph Schmidt; John M. Henderson Dissociating temporal inhibition of return and saccadic momentum across multiple eye-movement tasks Journal Article In: Journal of Vision, vol. 14, no. 14, pp. 1–12, 2014. @article{Luke2014, Saccade latencies are longer prior to an eye movement to a recently fixated location than to control locations, a phenomenon known as oculomotor inhibition of return (O-IOR). There are theoretical reasons to expect that O-IOR would vary in magnitude across different eye movement tasks, but previous studies have produced contradictory evidence. However, this may have been because previous studies have not dissociated O-IOR and a related phenomenon, saccadic momentum, which is a bias to repeat saccade programs that also influences saccade latencies. The present study dissociated the influence of O-IOR and saccadic momentum across three complex visual tasks: scene search, scene memorization, and scene aesthetic preference. O-IOR was of similar magnitude across all three tasks, while saccadic momentum was weaker in scene search. |
Gang Luo; Tyler W. Garaas; Marc Pomplun Salient stimulus attracts focus of peri-saccadic mislocalization Journal Article In: Vision Research, vol. 100, pp. 93–98, 2014. @article{Luo2014, Visual localization during saccadic eye movements is prone to error. Flashes shortly before and after the onset of saccades are usually perceived to shift towards the saccade target, creating a "compression" pattern. Typically, the saccade landing point coincides with a salient saccade target. We investigated whether the mislocalization focus follows the actual saccade landing point or a salient stimulus. Subjects made saccades to either a target or a memorized location without target. In some conditions, another salient marker was presented between the initial fixation and the saccade landing point. The experiments were conducted on both black and picture backgrounds. The results show that: (a) when a saccade target or a marker (spatially separated from the saccade landing point) was present, the compression pattern of mislocalization was significantly stronger than in conditions without them, for both black and picture background conditions, and (b) the mislocalization focus tended towards the salient stimulus regardless of whether it was the saccade target or the marker. Our results suggest that a salient stimulus presented in the scene may have an attracting effect and therefore contribute to the non-uniformity of saccadic mislocalization of a probing flash. |
Richard Kunert; Christoph Scheepers Speed and accuracy of dyslexic versus typical word recognition: An eye-movement investigation Journal Article In: Frontiers in Psychology, vol. 5, pp. 1129, 2014. @article{Kunert2014, Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. |
I. Kurki; Miguel P. Eckstein Template changes with perceptual learning are driven by feature informativeness Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–18, 2014. @article{Kurki2014, Perceptual learning changes the way the human visual system processes stimulus information. Previous studies have shown that the human brain's weightings of visual information (the perceptual template) become better matched to the optimal weightings. However, the dynamics of the template changes are not well understood. We used the classification image method to investigate whether visual field or stimulus properties govern the dynamics of the changes in the perceptual template. A line orientation discrimination task where highly informative parts were placed in the peripheral visual field was used to test three hypotheses: (1) The template changes are determined by the visual field structure, initially covering stimulus parts closer to the fovea and expanding toward the periphery with learning; (2) the template changes are object centered, starting from the center and expanding toward edges; and (3) the template changes are determined by stimulus information, starting from the most informative parts and expanding to less informative parts. Results show that, initially, the perceptual template contained only the more peripheral, highly informative parts. Learning expanded the template to include less informative parts, resulting in an increase in sampling efficiency. A second experiment interleaved parts with high and low signal-to-noise ratios and showed that template reweighting through learning was restricted to stimulus elements that are spatially contiguous to parts with initial high template weights. The results suggest that the informativeness of features determines how the perceptual template changes with learning. Further, the template expansion is constrained by spatial proximity. |
Chigusa Kurumada; Meredith Brown; Sarah Bibyk; Daniel F. Pontillo; Michael K. Tanenhaus Is it or isn't it: Listeners make rapid use of prosody to infer speaker meanings Journal Article In: Cognition, vol. 133, no. 2, pp. 335–342, 2014. @article{Kurumada2014, A visual world experiment examined the time course for pragmatic inferences derived from visual context and contrastive intonation contours. We used the construction It looks like an X pronounced with either (a) an H* pitch accent on the final noun and a low boundary tone, or (b) a contrastive L+H* pitch accent and a rising boundary tone, a contour that can support contrastive inference (e.g., It LOOKS_L+H* like a zebra_L-H% ... (but it is not)). When the visual display contained a single related set of contrasting pictures (e.g. a zebra vs. a zebra-like animal), effects of LOOKS_L+H* emerged prior to the processing of phonemic information from the target noun. The results indicate that the prosodic processing is incremental and guided by contextually-supported expectations. Additional analyses ruled out explanations based on context-independent heuristics that might substitute for online computation of contrast. |
MiYoung Kwon; Pinglei Bao; Rachel Millin; Bosco S. Tjan Radial-tangential anisotropy of crowding in the early visual areas Journal Article In: Journal of Neurophysiology, vol. 112, no. 10, pp. 2413–2422, 2014. @article{Kwon2014, Crowding, the inability to recognize an individual object in clutter (Bouma H. Nature 226: 177–178, 1970), is considered a major impediment to object recognition in peripheral vision. Despite its significance, the cortical loci of crowding are not well understood. In particular, the role of the primary visual cortex (V1) remains unclear. Here we utilize a diagnostic feature of crowding to identify the earliest cortical locus of crowding. Controlling for other factors, radially arranged flankers induce more crowding than tangentially arranged ones (Toet A, Levi DM. Vision Res 32: 1349–1357, 1992). We used functional magnetic resonance imaging (fMRI) to measure the change in mean blood oxygenation level-dependent (BOLD) response due to the addition of a middle letter between a pair of radially or tangentially arranged flankers. Consistent with the previous finding that crowding is associated with a reduced BOLD response [Millin R, Arman AC, Chung ST, Tjan BS. Cereb Cortex (July 5, 2013). doi:10.1093/cercor/bht159], we found that the BOLD signal evoked by the middle letter depended on the arrangement of the flankers: less BOLD response was associated with adding the middle letter between radially arranged flankers compared with adding it between tangentially arranged flankers. This anisotropy in BOLD response was present as early as V1 and remained significant in downstream areas. The effect was observed while subjects' attention was diverted away from the testing stimuli. Contrast detection threshold for the middle letter was unaffected by flanker arrangement, ruling out surround suppression of contrast response as a major factor in the observed BOLD anisotropy. Our findings support the view that V1 contributes to crowding. |
Nayoung Kwon; Patrick Sturt The use of control information in dependency formation: An eye-tracking study Journal Article In: Journal of Memory and Language, vol. 73, no. 1, pp. 59–80, 2014. @article{Kwon2014a, Recent research has provided much evidence that sentence comprehension can be extremely predictive. However, we currently know little about the limits of predictive processing. In two eye-tracking experiments, we examined whether predictive information in dependency formation is inevitably given priority over a well-known structural preference in syntactic ambiguity resolution. Experiment 1 used sentences including control nouns like order (e.g. After Andrew's order to wash the kids came over to the house). If predictive dependency information is given priority over disambiguation preferences, then readers could immediately interpret the kids as the ones who have been ordered to wash, thus avoiding the garden path at the main verb came. However, garden path effects were found irrespective of control information, although the garden path difficulty was reduced when the lexical control information highlighted the globally correct analysis (as in the above example), relative to when it did not. Experiment 2 replicated these results with adjunct control, where the relevant dependency is obligatory (e.g. After refusing to wash the kids came over to the house). Again, control information did not influence initial disambiguation, but did affect the difficulty of garden path recovery. Overall, the results suggest that there are limitations on the influence of predictive dependency formation on on-line structural disambiguation. |
Juha M. Lahnakoski; Enrico Glerean; Iiro P. Jääskeläinen; Jukka Hyönä; Riitta Hari; Mikko Sams; Lauri Nummenmaa Synchronous brain activity across individuals underlies shared psychological perspectives Journal Article In: NeuroImage, vol. 100, pp. 316–324, 2014. @article{Lahnakoski2014, For successful communication, we need to understand the external world consistently with others. This task requires sufficiently similar cognitive schemas or psychological perspectives that act as filters to guide the selection, interpretation and storage of sensory information, perceptual objects and events. Here we show that when individuals adopt a similar psychological perspective during natural viewing, their brain activity becomes synchronized in specific brain regions. We measured brain activity with functional magnetic resonance imaging (fMRI) from 33 healthy participants who viewed a 10-min movie twice, assuming once a 'social' (detective) and once a 'non-social' (interior decorator) perspective to the movie events. Pearson's correlation coefficient was used to derive multisubject voxelwise similarity measures (inter-subject correlations; ISCs) of functional MRI data. We used k-nearest-neighbor and support vector machine classifiers as well as a Mantel test on the ISC matrices to reveal brain areas wherein ISC predicted the participants' current perspective. ISC was stronger in several brain regions-most robustly in the parahippocampal gyrus, posterior parietal cortex and lateral occipital cortex-when the participants viewed the movie with similar rather than different perspectives. Synchronization was not explained by differences in visual sampling of the movies, as estimated by eye gaze. We propose that synchronous brain activity across individuals adopting similar psychological perspectives could be an important neural mechanism supporting shared understanding of the environment. |
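The inter-subject correlation (ISC) measure described in the abstract above is, at its core, an average of pairwise Pearson correlations between subjects' voxel time courses. The sketch below illustrates the idea for a single voxel with toy time courses; it is not the study's analysis pipeline, which operates on whole fMRI volumes.

```python
# Minimal sketch of inter-subject correlation (ISC) for one voxel: the mean
# pairwise Pearson correlation across subjects' time courses. Toy data only.
from itertools import combinations

def pearson_r(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def isc(timecourses):
    """Mean Pearson r over all subject pairs (one voxel's time courses)."""
    pairs = list(combinations(timecourses, 2))
    return sum(pearson_r(x, y) for x, y in pairs) / len(pairs)

# Two "same-perspective" subjects track each other; a third does not.
s1 = [0.0, 1.0, 2.0, 1.0, 0.0]
s2 = [0.1, 0.9, 2.1, 1.1, -0.1]
s3 = [2.0, 0.0, 1.0, 0.0, 2.0]
print(round(isc([s1, s2]), 3))      # high ISC for the matched pair
print(round(isc([s1, s2, s3]), 3))  # adding a mismatched subject lowers ISC
```

The study's finding that ISC was stronger under shared perspectives corresponds, in this sketch, to the matched pair producing a higher mean correlation than a group containing a subject with a divergent time course.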
Rogier Landman; Jitendra Sharma; Mriganka Sur; Robert Desimone Effect of distracting faces on visual selective attention in the monkey Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 50, pp. 18037–18042, 2014. @article{Landman2014, In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions, interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for greater than 200 ms. Emotional faces also affected saccade velocity and reduced pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical–subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits. |
Alexandre Lang; Chrystal Gaertner; Elham Ghassemi; Qing Yang; Christophe Orssaud; Zoï Kapoula Saccade-vergence properties remain more stable over short-time repetition under overlap than under gap task: A preliminary study Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 372, 2014. @article{Lang2014, Under natural circumstances, saccade-vergence eye movements are among the most frequently occurring. This study examines the properties of such movements focusing on short-term repetition effects. Are such movements robust over time or are they subject to tiredness? 12 healthy adults performed convergent and divergent combined eye movements either in a gap task (i.e., 200 ms between the end of the fixation stimulus and the beginning of the target stimulus) or in an overlap task (i.e., the peripheral target begins 200 ms before the end of the fixation stimulus). Latencies were shorter in the gap task than in the overlap task for both saccade and vergence components. Repetition had no effect on latency, which is a novel result. In both tasks, saccades were initiated later and executed faster (mean and peak velocities) than the vergence component. The mean and peak velocities of both components decreased over trials in the gap task but remained constant in the overlap task. This result is also novel and has some clinical implications. Another novel result concerns the accuracy of the saccade component that was better in the gap than in the overlap task. The accuracy also decreased over trials in the gap task but remained constant in the overlap task. The major result of this study is that under a controlled mode of initiation (overlap task) properties of combined eye movements are more stable than under automatic triggering (gap task). These results are discussed in terms of saccade-vergence interactions, convergence-divergence specificities and repetition versus adaptation protocols. |
Nicholas D. Lange; Daniel R. Buttaccio; Eddy J. Davelaar; Rick P. Thomas Using the memory activation capture (MAC) procedure to investigate the temporal dynamics of hypothesis generation Journal Article In: Memory & Cognition, vol. 42, no. 2, pp. 264–274, 2014. @article{Lange2014, Research investigating top-down capture has demonstrated a coupling of working memory content with attention and eye movements. By capitalizing on this relationship, we have developed a novel methodology, called the memory activation capture (MAC) procedure, for measuring the dynamics of working memory content supporting complex cognitive tasks (e.g., decision making, problem solving). The MAC procedure employs briefly presented visual arrays containing task-relevant information at critical points in a task. By observing which items are preferentially fixated, we gain a measure of working memory content as the task evolves through time. The efficacy of the MAC procedure was demonstrated in a dynamic hypothesis generation task in which some of its advantages over existing methods for measuring changes in the contents of working memory over time are highlighted. In two experiments, the MAC procedure was able to detect the hypothesis that was retrieved and placed into working memory. Moreover, the results from Experiment 2 suggest a two-stage process following hypothesis retrieval, whereby the hypothesis undergoes a brief period of heightened activation before entering a lower activation state in which it is maintained for output. The results of both experiments are of additional general interest, as they represent the first demonstrations of top-down capture driven by participant-established WM content retrieved from long-term memory. |
K. Lankinen; Jukka Saari; Riitta Hari; Miika Koskinen Intersubject consistency of cortical MEG signals during movie viewing Journal Article In: NeuroImage, vol. 92, pp. 217–224, 2014. @article{Lankinen2014, According to recent functional magnetic resonance imaging (fMRI) studies, spectators of a movie may share similar spatiotemporal patterns of brain activity. We aimed to extend these findings of intersubject correlation to temporally accurate single-trial magnetoencephalography (MEG). A silent 15-min black-and-white movie was shown to eight subjects twice. We adopted a spatial filtering model and estimated its parameter values by using multi-set canonical correlation analysis (M-CCA) so that the intersubject correlation was maximized. The procedure resulted in multiple (mutually uncorrelated) time-courses with statistically significant intersubject correlations at frequencies below 10 Hz; the maximum correlation was 0.28 ± 0.075 in the ≤1 Hz band. Moreover, the 24-Hz frame rate elicited steady-state responses with statistically significant intersubject correlations up to 0.29 ± 0.12. To assess the brain origin of the across-subjects correlated signals, the time-courses were correlated with minimum-norm source current estimates (MNEs) projected to the cortex. The time series implied across-subjects synchronous activity in the early visual, posterior and inferior parietal, lateral temporooccipital, and motor cortices, and in the superior temporal sulcus (STS) bilaterally. These findings demonstrate the capability of the proposed methodology to uncover cortical MEG signatures from single-trial signals that are consistent across spectators of a movie. |
Axel Larsen Deconstructing mental rotation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 3, pp. 1072–1091, 2014. @article{Larsen2014, A random walk model of the classical mental rotation task is explored in two experiments. By assuming that a mental rotation is repeated until sufficient evidence for a match/mismatch is obtained, the model accounts for the approximately linearly increasing reaction times (RTs) on positive trials, flat RTs on negative trials, false alarms and miss rates, effects of complexity, and for the number of eye movement switches between stimuli as functions of angular difference in orientation. Analysis of eye movements supports key aspects of the model and shows that initial processing time is roughly constant until the first saccade switch between stimulus objects, while the duration of the remaining trial increases approximately linearly as a function of angular discrepancy. The increment results from additive effects of (a) a linear increase in the number of saccade switches between stimulus objects, (b) a linear increase in the number of saccades on a stimulus, and (c) a linear increase in the number and in the duration of fixations on a stimulus object. The fixation duration increment was the same on simple and complex trials (about 15 ms per 60°), which suggests that the critical orientation alignment takes place during fixations at very high speed. |
Adam M. Larson; Tyler E. Freeman; Ryan V. Ringer; Lester C. Loschky The spatiotemporal dynamics of scene gist recognition Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 2, pp. 471–487, 2014. @article{Larson2014, Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space: within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. |
Nida Latif; Arlene Gehmacher; Monica S. Castelhano; Kevin G. Munhall The art of gaze guidance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 40, no. 1, pp. 33–39, 2014. @article{Latif2014, An ongoing challenge in scene perception is identifying the factors that influence how we explore our visual world. By using multiple versions of paintings as a tool to control for high-level influences, we show that variation in the visual details of a painting causes differences in observers' gaze despite constant task and content. Further, we show that by switching locations of highly salient regions through textural manipulation, a corresponding switch in eye movement patterns is observed. Our results show that salient regions and gaze behavior are not simply correlated; variation in saliency through textural differences causes an observer to direct their viewing accordingly. This work demonstrates the direct contribution of low-level factors in visual exploration by showing that examination of a scene, even for aesthetic purposes, can be easily manipulated by altering the low-level properties and hence, the saliency of the scene. |
Claudio Lavín; René San Martín; Eduardo Rosales Jubal Pupil dilation signals uncertainty and surprise in a learning gambling task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 7, pp. 218, 2014. @article{Lavin2014, Pupil dilation under constant illumination is a physiological marker whose modulation is related to several cognitive functions involved in daily decision making. There is evidence for a role of pupil dilation change during decision-making tasks associated with uncertainty, reward-prediction errors and surprise. However, while some work suggests that pupil dilation is mainly modulated by reward predictions, others point out that this marker is related to uncertainty signaling and surprise. Supporting the latter hypothesis, the neural substrate of this marker is related to noradrenaline (NA) activity which has been also related to uncertainty signaling. In this work we aimed to test whether pupil dilation is a marker for uncertainty and surprise in a learning task. We recorded pupil dilation responses in 10 participants performing the Iowa Gambling Task (IGT), a decision-making task that requires learning and constant monitoring of outcomes' feedback, which are important variables within the traditional study of human decision making. Results showed that pupil dilation changes were modulated by learned uncertainty and surprise regardless of feedback magnitudes. Interestingly, greater pupil dilation changes were found during positive feedback (PF) presentation when there was lower uncertainty about a future negative feedback (NF); and by surprise during NF presentation. These results support the hypothesis that pupil dilation is a marker of learned uncertainty, and may be used as a marker of NA activity facing unfamiliar situations in humans. |
Rebecca P. Lawson; Ben Seymour; Eleanor Loh; Antoine Lutti; Raymond J. Dolan; Peter Dayan; Nikolaus Weiskopf; Jonathan P. Roiser The habenula encodes negative motivational value associated with primary punishment in humans Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 32, pp. 11858–11863, 2014. @article{Lawson2014, Learning what to approach, and what to avoid, involves assigning value to environmental cues that predict positive and negative events. Studies in animals indicate that the lateral habenula encodes the previously learned negative motivational value of stimuli. However, involvement of the habenula in dynamic trial-by-trial aversive learning has not been assessed, and the functional role of this structure in humans remains poorly characterized, in part, due to its small size. Using high-resolution functional neuroimaging and computational modeling of reinforcement learning, we demonstrate positive habenula responses to the dynamically changing values of cues signaling painful electric shocks, which predict behavioral suppression of responses to those cues across individuals. By contrast, negative habenula responses to monetary reward cue values predict behavioral invigoration. Our findings show that the habenula plays a key role in an online aversive learning system and in generating associated motivated behavior in humans. |
Stephen Layfield; Wesley Burge; William G. Mitchell; Lesley A. Ross; Christine Denning; Frank Amthor; Kristina M. Visscher The effect of speed of processing training on microsaccade amplitude Journal Article In: PLoS ONE, vol. 9, no. 9, pp. e107808, 2014. @article{Layfield2014, Older adults experience cognitive deficits that can lead to driving errors and a loss of mobility. Fortunately, some of these deficits can be ameliorated with targeted interventions which improve the speed and accuracy of simultaneous attention to a central and a peripheral stimulus called Speed of Processing training. To date, the mechanisms behind this effective training are unknown. We hypothesized that one potential mechanism underlying this training is a change in distribution of eye movements of different amplitudes. Microsaccades are small amplitude eye movements made when fixating on a stimulus, and are thought to counteract the "visual fading" that occurs when static stimuli are presented. Due to retinal anatomy, larger microsaccadic eye movements are needed to move a peripheral stimulus between receptive fields and counteract visual fading. Alternatively, larger microsaccades may decrease performance due to neural suppression. Because larger microsaccades could aid or hinder peripheral vision, we examine the distribution of microsaccades during stimulus presentation. Our results indicate that there is no statistically significant change in the proportion of large amplitude microsaccades during a Useful Field of View-like task after training in a small sample of older adults. Speed of Processing training does not appear to result in changes in microsaccade amplitude, suggesting that the mechanism underlying Speed of Processing training is unlikely to rely on microsaccades. |
Ada Le; Matthias Niemeier Visual field preferences of object analysis for grasping with one hand Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 782, 2014. @article{Le2014, When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a,b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2014). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling with the left hand showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields. |
Chia-lin Lee; Daniel Mirman; Laurel J. Buxbaum Abnormal dynamics of activation of object use information in apraxia: Evidence from eyetracking Journal Article In: Neuropsychologia, vol. 59, no. 1, pp. 13–26, 2014. @article{Lee2014, Action representations associated with object use may be incidentally activated during visual object processing, and the time course of such activations may be influenced by lexical-semantic context (e.g., Lee, Middleton, Mirman, Kalénine, & Buxbaum (2012). Journal of Experimental Psychology: Human Perception and Performance, 39(1), 257-270). In this study we used the "visual world" eye-tracking paradigm to examine whether a deficit in producing skilled object-use actions (apraxia) is associated with abnormalities in incidental activation of action information, and assessed the neuroanatomical substrates of any such deficits. Twenty left hemisphere stroke patients, ten of whom were apraxic, performed a task requiring identification of a named object in a visual display containing manipulation-related and unrelated distractor objects. Manipulation relationships among objects were not relevant to the identification task. Objects were cued with neutral ("S/he saw the. . .."), or action-relevant ("S/he used the. . ..") sentences. Non-apraxic participants looked at use-related non-target objects significantly more than at unrelated non-target objects when cued both by neutral and action-relevant sentences, indicating that action information is incidentally activated. In contrast, apraxic participants showed delayed activation of manipulation-based action information during object identification when cued by neutral sentences. The magnitude of delayed activation in the neutral sentence condition was reliably predicted by lower scores on a test of gesture production to viewed objects, as well as by lesion loci in the inferior parietal and posterior temporal lobes. 
However, when cued by a sentence containing an action verb, apraxic participants showed fixation patterns that were statistically indistinguishable from non-apraxic controls. In support of grounded theories of cognition, these results suggest that apraxia and temporal-parietal lesions may be associated with abnormalities in incidental activation of action information from objects. Further, they suggest that the previously-observed facilitative role of action verbs in the retrieval of object-related action information extends to participants with apraxia. |
Dongpyo Lee; Howard Poizner; Daniel M. Corcos; Denise Y. P. Henriques Unconstrained reaching modulates eye-hand coupling Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 211–223, 2014. @article{Lee2014b, Eye–hand coordination is a crucial element of goal-directed movements. However, few studies have looked at the extent to which unconstrained movements of the eyes and hand made to targets influence each other. We studied human participants who moved either their eyes or both their eyes and hand to one of three static or flashed targets presented in 3D space. The eyes were directed, and the hand was located, at a common start position on either the right or left side of the body. We found that the velocity and scatter of memory-guided saccades (flashed targets) differed significantly when produced in combination with a reaching movement than when produced alone. Specifically, when accompanied by a reach, peak saccadic velocities were lower than when the eye moved alone. Peak saccade velocities, as well as latencies, were also highly correlated with those for reaching movements, especially for the briefly flashed targets compared to the continuous visible target. The scatter of saccade endpoints was greater when the saccades were produced with the reaching movement than when produced without, and the size of the scatter for both saccades and reaches was weakly correlated. These findings suggest that the saccades and reaches made to 3D targets are weakly to moderately coupled both temporally and spatially and that this is partly the result of the arm movement influencing the eye movement. Taken together, this study provides further evidence that the oculomotor and arm motor systems interact above and beyond any common target representations shared by the two motor systems. |
Kang Woo Lee; Yubu Lee Scanpath generated by cue-driven activation and spatial strategy: A comparative study Journal Article In: Cognitive Computation, vol. 6, no. 3, pp. 585–594, 2014. @article{Lee2014a, A comparative study of a cued face search task is presented in this paper. Human participants and a computer model carried out a task in which they were required to locate a color-cued target face. Human-generated eye fixations and scanpaths were compared with those generated by the computational model. Throughout the comparison, we considered the similarities and dissimilarities between the two systems' performances. The results show that human eye fixations in a valid cue search are highly correlated with the computer-generated fixation points, but not with those in random and invalid cue searches. Moreover, the comparison between human- and computer-generated scanpaths showed that the scanpath that links the fixation points is not randomly generated. Our results imply that eye movement is accomplished not only by cue-driven activation, but also by a spatial strategy. |
Timothy Leffel; Miriam Lauter; Masha Westerlund; Liina Pylkkänen Restrictive vs. non-restrictive composition: A magnetoencephalography study Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 10, pp. 1191–1204, 2014. @article{Leffel2014, Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high level contextual and potentially also pragmatic factors. |
Carly J. Leonard; Benjamin M. Robinson; Britta Hahn; James M. Gold; Steven J. Luck Enhanced distraction by magnocellular salience signals in schizophrenia Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 359–366, 2014. @article{Leonard2014, Research on schizophrenia has provided evidence of both impaired attentional control and dysfunctional magnocellular sensory processing. The present study tested the hypothesis that these impairments may be related, such that people with schizophrenia would be differentially distracted by stimuli that strongly activate the magnocellular pathway. To accomplish this, we used a visual attention paradigm from the basic cognitive neuroscience literature designed to assess the capture of attention by salient but irrelevant stimuli. Participants searched for a target shape in an array of non-target shapes. On some trials, a salient distractor was presented that either selectively activated the parvocellular system (parvo-biased distractors) or activated both the magnocellular and parvocellular systems (magno+parvo distractors). For both manual reaction times and eye movement measures, the magno+parvo distractors captured attention more strongly than the parvo-biased distractors in people with schizophrenia, but the opposite pattern was observed in matched healthy control participants. These results indicate that attentional control deficits in schizophrenia may arise, at least in part, by means of an interaction with magnocellular sensory dysfunction. |
Benjamin D. Lester; Paul Dassonville The role of the right superior parietal lobule in processing visual context for the establishment of the egocentric reference frame Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 10, pp. 2201–2209, 2014. @article{Lester2014, Visual cues contribute to the creation of an observer's egocentric reference frame, within which the locations and orientations of objects can be judged. However, these cues can also be misleading. In the rod-and-frame illusion, for example, a large tilted frame distorts the observer's sense of vertical, causing an enclosed rod to appear tilted in the opposite direction. To determine the brain region responsible for processing these spatial cues, we used TMS to suppress neural activity in the superior parietal lobule of healthy observers. Stimulation of the right hemisphere, but not the left, caused a significant reduction in rod-and-frame susceptibility. In contrast, a tilt illusion caused by a mechanism that does not involve a distortion of the observer's egocentric reference frame was unaffected. These results demonstrate that the right superior parietal lobule is actively involved in processing the contextual cues that contribute to our perception of egocentric space. |
Chi Yui Leung; Masatoshi Sugiura; Daisuke Abe; Lisa Yoshikawa The perceptual span in second language reading: An eye-tracking study using a gaze-contingent moving window paradigm Journal Article In: Open Journal of Modern Linguistics, vol. 4, pp. 585–594, 2014. @article{Leung2014, The perceptual span, which is the visual area providing useful information to a reader during eye fixation, has been well investigated among native or first language (L1) readers, but not among second language (L2) readers. Our goal was to measure the size of the perceptual span among Japanese university students who learn English as a foreign language (EFL), in order to investigate parafoveal processing during L2 reading. In an experiment using the gaze-contingent moving window paradigm, we compared perceptual span between Japanese EFL readers (N = 42) and native English L1 readers (N = 14). Our results showed that (1) the EFL readers had a smaller perceptual span than the L1 readers did, and (2) the facilitating effect of parafoveal information was greater for faster EFL readers than it was for slower EFL readers. These findings provide evidence that EFL readers can utilize only limited parafoveal information during fixation when compared with L1 readers. |
Nadine Kloth; Susannah E. Shields; Gillian Rhodes On the other side of the fence: Effects of social categorization and spatial grouping on memory and attention for own-race and other-race faces Journal Article In: PLoS ONE, vol. 9, no. 9, pp. e105979, 2014. @article{Kloth2014, The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. 
The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias. |
Zuzanna Klyszejko; Masih Rahmati; Clayton E. Curtis Attentional priority determines working memory precision Journal Article In: Vision Research, vol. 105, pp. 70–76, 2014. @article{Klyszejko2014, Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information has limits that define its capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced choice task (2AFC) with distance discrimination. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2, we manipulated priority by varying monetary incentives contingent upon successful memory for items tested. Moreover, we illustrate our hypothesis using a simple model that distributed attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information. |
Pia Knoeferle Conjunction meaning can modulate parallelism facilitation: Eye-tracking evidence from German clausal coordination Journal Article In: Journal of Memory and Language, vol. 75, pp. 140–158, 2014. @article{Knoeferle2014, In and-coordinated clauses, the second conjunct elicits faster reading times when it parallels (vs. does not parallel) the first in constituent order. This paper examined whether such parallelism facilitation results from simple constituent order priming from the first to the second clause, or whether it can be modulated through the linguistic context (the conjunction and clausal relations). Three eye-tracking experiments on German assessed this issue by manipulating conjunction meaning and type within subjects (resemblance: 'and' vs. adversative: 'but' or 'while'; coordinating: 'and' and 'but'; subordinating: 'while'), and by varying the clausal relations between experiments. Clausal parallelism facilitation was reduced when syntactic dependence of the clauses on a superordinate verb reinforced their coherence, and semantic expectations for 'but' and 'while' were violated through the parallel constituent order and thematic role relations of noun phrases. By contrast, it was not reduced when the same expectations were satisfied through other sentence constituents (temporally contrastive adverbs) and when the coordination involved matrix clauses. The contextual modulation of parallelism facilitation rules out simple priming as the only underlying mechanism. The observed facilitation rather reflects compositional processing of the coordinands and the conjunction in the linguistic context. |
Kathryn Koehler; Fei Guo; Sheng Zhang; Miguel P. Eckstein What do saliency models predict? Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–27, 2014. @article{Koehler2014, Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). However, a single image set had never before been used to systematically compare free viewing with other tasks. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might also be limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. |
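Saliency models like those evaluated above are typically scored with an ROC analysis comparing saliency values at fixated versus control locations. The sketch below shows this standard AUC computation; the score lists are made up, and this is not the authors' evaluation pipeline.

```python
# Sketch of a standard way to score how well a saliency map predicts human
# fixations (not the authors' exact pipeline): the ROC area under the curve,
# computed by comparing saliency values at fixated pixels with values at
# control (non-fixated) pixels. The score lists below are illustrative.

def saliency_auc(fixated_vals, control_vals):
    """Probability that a random fixated location has a higher saliency value
    than a random control location (equivalent to the ROC AUC; ties count
    half)."""
    wins = ties = 0
    for f in fixated_vals:
        for c in control_vals:
            if f > c:
                wins += 1
            elif f == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(fixated_vals) * len(control_vals))

# A model that assigns higher saliency to fixated locations scores above 0.5
# (chance); perfect separation of the two sets would score 1.0.
auc = saliency_auc([0.9, 0.7, 0.6], [0.2, 0.8, 0.1])
```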
Thorsten Kolling; Gabriella Óturai; Monika Knopf Is selective attention the basis for selective imitation in infants: An eye-tracking study of deferred imitation with 12-month-olds. Journal Article In: Journal of Experimental Child Psychology, vol. 124, no. 1, pp. 18–35, 2014. @article{Kolling2014, Infants and children do not blindly copy every action they observe during imitation tasks. Research has demonstrated that infants are efficient selective imitators. The impact of selective perceptual processes (selective attention) on selective deferred imitation, however, is still poorly described. The current study, therefore, analyzed 12-month-old infants' looking behavior during demonstration of two types of target actions: arbitrary versus functional actions. A fully automated remote eye tracker was used to assess infants' looking behavior during action demonstration. After a 30-min delay, infants' deferred imitation performance was assessed. Next to replicating a memory effect, results demonstrate that infants do imitate significantly more functional actions than arbitrary actions (functionality effect). Eye-tracking data show that whereas infants do not fixate significantly longer on functional actions than on arbitrary actions, the number of fixations and the number of saccades differ between functional and arbitrary actions, indicating different encoding mechanisms. In addition, item-level findings differ from overall findings, indicating that perceptual and conceptual item features influence looking behavior. Looking behavior on both the overall and item levels, however, does not relate to deferred imitation performance. Taken together, the findings demonstrate that, on the one hand, selective imitation is not explainable merely by selective attention processes. On the other hand, notwithstanding this reasoning, attention processes on the item level are important for encoding processes during target action demonstration. Limitations and future studies are discussed. |
Oleg V. Komogortsev; Corey D. Holland; Alex Karpov; Larry R. Price Biometrics via oculomotor plant characteristics: Impact of parameters in oculomotor plant model Journal Article In: ACM Transactions on Applied Perception, vol. 11, no. 4, pp. 1–17, 2014. @article{Komogortsev2014, This article proposes and evaluates a novel biometric approach utilizing the internal, nonvisible, anatomical structure of the human eye. The proposed method estimates the anatomical properties of the human oculomotor plant from the measurable properties of human eye movements, utilizing a two-dimensional linear homeomorphic model of the oculomotor plant. The derived properties are evaluated within a biometric framework to determine their efficacy in both verification and identification scenarios. The results suggest that the physical properties derived from the oculomotor plant model are capable of achieving 20.3% equal error rate and 65.7% rank-1 identification rate on high-resolution equipment involving 32 subjects, with biometric samples taken over four recording sessions; or 22.2% equal error rate and 12.6% rank-1 identification rate on low-resolution equipment involving 172 subjects, with biometric samples taken over two recording sessions. |
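The equal error rates reported in the abstract above are a standard biometric summary statistic: the operating point at which the false accept rate equals the false reject rate. A minimal sketch of how an EER is obtained from matcher scores follows; the scores are illustrative and this is not the authors' code.

```python
# Sketch of an equal error rate (EER) computation from a biometric matcher's
# similarity scores (illustrative data, not the authors' code): sweep a
# decision threshold and find where the false accept rate (FAR) and false
# reject rate (FRR) cross.

def equal_error_rate(genuine_scores, impostor_scores):
    """Return the EER, approximated over thresholds taken from the scores."""
    best = None
    for t in sorted(set(genuine_scores + impostor_scores)):
        # Genuine samples scoring below the threshold are falsely rejected;
        # impostor samples scoring at or above it are falsely accepted.
        frr = sum(1 for s in genuine_scores if s < t) / len(genuine_scores)
        far = sum(1 for s in impostor_scores if s >= t) / len(impostor_scores)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

eer = equal_error_rate([0.8, 0.7, 0.9, 0.6], [0.3, 0.5, 0.65, 0.2])
```

A lower EER means better separation of genuine and impostor scores; the 20-22% rates reported above correspond to substantial overlap between the two distributions.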
Agnieszka E. Konopka; Antje S. Meyer Priming sentence planning Journal Article In: Cognitive Psychology, vol. 73, pp. 1–40, 2014. @article{Konopka2014, Sentence production requires mapping preverbal messages onto linguistic structures. Because sentences are normally built incrementally, the information encoded in a sentence-initial increment is critical for explaining how the mapping process starts and for predicting its timecourse. Two experiments tested whether and when speakers prioritize encoding of different types of information at the outset of formulation by comparing production of descriptions of transitive events (e.g., A dog is chasing the mailman) that differed on two dimensions: the ease of naming individual characters and the ease of apprehending the event gist (i.e., encoding the relational structure of the event). To additionally manipulate ease of encoding, speakers described the target events after receiving lexical primes (facilitating naming; Experiment 1) or structural primes (facilitating generation of a linguistic structure; Experiment 2). Both properties of the pictured events and both types of primes influenced the form of target descriptions and the timecourse of formulation: character-specific variables increased the probability of speakers encoding one character with priority at the outset of formulation, while the ease of encoding event gist and of generating a syntactic structure increased the likelihood of early encoding of information about both characters. The results show that formulation is flexible and highlight some of the conditions under which speakers might employ different planning strategies. |
Christof Körner; Verena Braunstein; Matthias Stangl; Alois Schlögl; Christa Neuper; Anja Ischebeck In: Psychophysiology, vol. 51, no. 4, pp. 385–395, 2014. @article{Koerner2014, To search for a target in a complex environment is an everyday behavior that ends with finding the target. When we search for two identical targets, however, we must continue the search after finding the first target and memorize its location. We used fixation-related potentials to investigate the neural correlates of different stages of the search, that is, before and after finding the first target. Having found the first target influenced subsequent distractor processing. Compared to distractor fixations before the first target fixation, a negative shift was observed for three subsequent distractor fixations. These results suggest that processing a target in continued search modulates the brain's response, either transiently by reflecting temporary working memory processes or permanently by reflecting working memory retention. |
Christof Körner; Margit Höfler; Barbara Tröbinger; Iain D. Gilchrist Eye movements indicate the temporal organisation of information processing in graph comprehension Journal Article In: Applied Cognitive Psychology, vol. 28, no. 3, pp. 360–373, 2014. @article{Koerner2014a, Hierarchical graphs (e.g. file system browsers and preference trees) represent objects (e.g. files and folders) as graph nodes and relations between them (e.g. sub-folder relations) as lines. We investigated the temporal organisation of two processes that are necessary for comprehending such graphs—search for the graph nodes and reasoning about their relation. We tracked eye movements to changing graphs while participants interpreted them. In Experiment 1, we masked the graph at a time when search processes had finished but reasoning was hypothetically ongoing. We observed a dramatic deterioration in comprehension compared with unmasked graphs. In Experiment 2, we changed the relation between critical graph nodes after search for them had finished, unbeknownst to participants. Participants mostly based their response on the graph as presented after the change. These results suggest that comprehension processes are organised in a sequential manner, an observation that can potentially be applied to the interactive presentation of graphs. |
Yoshito Kosai; Yasmine El-shamayleh; Amber M. Fyall; Anitha Pasupathy The role of visual area V4 in the discrimination of partially occluded shapes Journal Article In: Journal of Neuroscience, vol. 34, no. 25, pp. 8570–8584, 2014. @article{Kosai2014, The primate brain successfully recognizes objects, even when they are partially occluded. To begin to elucidate the neural substrates of this perceptual capacity, we measured the responses of shape-selective neurons in visual area V4 while monkeys discriminated pairs of shapes under varying degrees of occlusion. We found that neuronal shape selectivity always decreased with increasing occlusion level, with some neurons being notably more robust to occlusion than others. The responses of neurons that maintained their selectivity across a wider range of occlusion levels were often sufficiently sensitive to support behavioral performance. Many of these same neurons were distinctively selective for the curvature of local boundary features and their shape tuning was well fit by a model of boundary curvature (curvature-tuned neurons). A significant subset of V4 neurons also signaled the animal's upcoming behavioral choices; these decision signals had short onset latencies that emerged progressively later for higher occlusion levels. The time course of the decision signals in V4 paralleled that of shape selectivity in curvature-tuned neurons: shape selectivity in curvature-tuned neurons, but not others, emerged earlier than the decision signals. These findings provide evidence for the involvement of contour-based mechanisms in the segmentation and recognition of partially occluded objects, consistent with psychophysical theory. Furthermore, they suggest that area V4 participates in the representation of the relevant sensory signals and the generation of decision signals underlying discrimination. |
James F. Cavanagh; Thomas V. Wiecki; Angad Kochar; Michael J. Frank Eye tracking and pupillometry are indicators of dissociable latent decision processes Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 4, pp. 1476–1488, 2014. @article{Cavanagh2014, Can you predict what people are going to do just by watching them? This is certainly difficult: it would require a clear mapping between observable indicators and unobservable cognitive states. In this report, we demonstrate how this is possible by monitoring eye gaze and pupil dilation, which predict dissociable biases during decision making. We quantified decision making using the drift diffusion model (DDM), which provides an algorithmic account of how evidence accumulation and response caution contribute to decisions through separate latent parameters of drift rate and decision threshold, respectively. We used a hierarchical Bayesian estimation approach to assess the single trial influence of observable physiological signals on these latent DDM parameters. Increased eye gaze dwell time specifically predicted an increased drift rate toward the fixated option, irrespective of the value of the option. In contrast, greater pupil dilation specifically predicted an increase in decision threshold during difficult decisions. These findings suggest that eye tracking and pupillometry reflect the operations of dissociated latent decision processes. |
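The drift diffusion model (DDM) referred to in the abstract above can be simulated with a simple Euler scheme: noisy evidence accumulates at a drift rate until it crosses a decision threshold. The sketch below illustrates the two latent parameters the study links to gaze and pupil size; it is not the authors' hierarchical Bayesian implementation, and all parameter values are illustrative.

```python
import random

# Minimal Euler simulation of a single drift diffusion model (DDM) trial: a
# sketch of the model class used in the study, NOT the authors' hierarchical
# Bayesian estimation code. All parameter values are illustrative.

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Accumulate noisy evidence until it hits +threshold (choice 'A') or
    -threshold (choice 'B'); return the choice and the decision time."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = "A" if x >= threshold else ("B" if x <= -threshold else None)
    return choice, t

# Per the study's findings, longer gaze dwell time on an option maps onto a
# higher drift rate toward it, and greater pupil dilation onto a higher
# decision threshold (slower, more cautious responding).
choice, rt = simulate_ddm(drift=1.5, threshold=1.0)
```

With a positive drift toward option A, most simulated trials end at the upper bound; raising the threshold lengthens decision times without changing the direction of the drift, which is what makes the two parameters dissociable.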
Dario Cazzoli; Chrystalina A. Antoniades; Christopher Kennard; Thomas Nyffeler; Claudio L. Bassetti; René M. Müri Eye movements discriminate fatigue due to chronotypical factors and time spent on task - A double dissociation Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e87146, 2014. @article{Cazzoli2014, Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for both chronotypes. The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration a lack of synchronicity between chronotype and time of the day. |
Myriam Chanceaux; Jonathan Grainger Effects of number, complexity, and familiarity of flankers on crowded letter identification Journal Article In: Journal of Vision, vol. 14, pp. 1–17, 2014. @article{Chanceaux2014, We tested identification of target letters surrounded by a varying number (2, 4, 6) of horizontally aligned flanking elements. Strings were presented left or right of a central fixation dot, and targets were always at the center of the string. Flankers could be other letters, digits, symbols, simple shapes, or false fonts, and thus varied both in terms of visual complexity and familiarity. Two-alternative forced choice (2AFC) speed and accuracy were measured for choosing the target letter versus an alternative letter that was not present in the string. Letter identification became harder as the number of flankers increased. Greater flanker complexity led to more interference in target identification, whereas more complex targets were easier to identify. Effects of flanker complexity were found to depend on visual field and position of flankers, with the strongest effects seen for leftward flankers in the left visual field. Visual complexity predicted flanker interference better than familiarity, and better than target-flanker similarity. These results provide further support for an excessive feature-integration account of the interfering effects of both adjacent and nonadjacent flanking elements in horizontally aligned strings. |
Myriam Chanceaux; Anne Guérin-Dugué; Benoît Lemaire; Thierry Baccino A computational cognitive model of information search in textual materials Journal Article In: Cognitive Computation, vol. 6, no. 1, pp. 1–17, 2014. @article{Chanceaux2014a, Document foraging for information is a crucial and increasingly prevalent activity nowadays. We designed a computational cognitive model to simulate the oculomotor scanpath of an average web user searching for specific information from textual materials. In particular, the developed model dynamically combines visual, semantic, and memory processes to predict the user's focus of attention during information seeking from paragraphs of text. A series of psychological experiments was conducted using eye-tracking techniques in order to validate and refine the proposed model. Comparisons between model simulations and human data are reported and discussed taking into account the strengths and shortcomings of the model. The proposed model provides a unique contribution to the investigation of the cognitive processes involved during information search and bears significant implications for web page design and evaluation. |
Samuel G. Charlton; Nicola J. Starkey; John A. Perrone; Robert B. Isler What's the risk? A comparison of actual and perceived driving risk Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 25, no. A, pp. 50–64, 2014. @article{Charlton2014, It has long been presumed that drivers' perceptions of risk play an important role in guiding on-road behaviour. The answer to how accurately drivers perceive the momentary risk of a driving situation, however, is unknown. This research compared drivers' perceptions of the momentary risk for a range of roads to the objective risk associated with those roads. Videos of rural roads, filmed from the drivers' perspective, were presented to 69 participants seated in a driving simulator while they indicated the momentary levels of risk they were experiencing by moving a risk meter mounted on the steering wheel. Estimates of the objective levels of risk for the roads were calculated using road protection scores from the KiwiRAP database (part of the International Road Assessment Programme). Subsequently, the participants also provided risk estimates for still photos taken from the videos. Another group of 10 participants viewed the videos and photos while their eye movements and fixations were recorded. In a third experiment, 14 participants drove a subset of the roads in a car while providing risk ratings at selected points of interest. Results showed a high degree of consistency across the different methods. Certain road situations were rated as being riskier than the objective risk, and perhaps more importantly, the risk of other situations was significantly under-rated. Horizontal curves and narrow lanes were associated with over-rated risk estimates, while intersections and roadside hazards such as narrow road shoulders, power poles and ditches were significantly under-rated. Analysis of eye movements indicated that drivers did not fixate these features and that the spread of fixations, pupil size and eye blinks were significantly correlated with the risk ratings. An analysis of the road design elements at 77 locations in the video revealed five road characteristics that predicted nearly 80% of the variance in drivers' risk perceptions: horizontal curvature, lane and shoulder width, gradient, and the presence of median barriers. |
Samuel W. Cheadle; Semir Zeki The role of parietal cortex in the formation of color and motion based concepts Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 535, 2014. @article{Cheadle2014, Imaging evidence shows that separate subdivisions of parietal cortex, in and around the intraparietal sulcus (IPS), are engaged when stimuli are grouped according to color and to motion (Zeki and Stutters, 2013). Since grouping is an essential step in the formation of concepts, we wanted to learn whether parietal cortex is also engaged in the formation of concepts according to these two attributes. Using functional magnetic resonance imaging (fMRI), and choosing the recognition of concept-based color or motion stimuli as our paradigm, we found that there was strong concept-related activity in and around the IPS, a region whose homolog in the macaque monkey is known to receive direct but segregated anatomical inputs from V4 and V5. Parietal activity related to color concepts was juxtaposed but did not overlap with activity related to motion concepts, thus emphasizing the continuation of the segregation of color and motion into the conceptual system. Concurrent retinotopic mapping experiments showed that within the parietal cortex, concept-related activity increases within later stage IPS areas. |
Samuel Cheadle; Valentin Wyart; Konstantinos Tsetsos; Nicholas E. Myers; Vincent DeGardelle; Santiago Herce Castañón; Christopher Summerfield Adaptive gain control during human perceptual choice Journal Article In: Neuron, vol. 81, no. 6, pp. 1429–1441, 2014. @article{Cheadle2014a, Neural systems adapt to background levels of stimulation. Adaptive gain control has been extensively studied in sensory systems but overlooked in decision-theoretic models. Here, we describe evidence for adaptive gain control during the serial integration of decision-relevant information. Human observers judged the average information provided by a rapid stream of visual events (samples). The impact that each sample wielded over choices depended on its consistency with the previous sample, with more consistent or expected samples wielding the greatest influence over choice. This bias was also visible in the encoding of decision information in pupillometric signals and in cortical responses measured with functional neuroimaging. These data can be accounted for with a serial sampling model in which the gain of information processing adapts rapidly to reflect the average of the available evidence. |
Jiaqing Chen; Matthias Niemeier Do head-on-trunk signals modulate disengagement of spatial attention? Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 147–157, 2014. @article{Chen2014, Body schema is indispensable for sensorimotor control and learning, but whether it is associated with cognitive functions, such as allocation of spatial attention, remains unclear. Observations in patients with unilateral spatial neglect support this view, yet data from neurologically normal participants are inconsistent. Here, we investigated the influence of head-on-trunk positions (30° left or right, straight ahead) on disengagement of attention in healthy participants. Five experiments examined the effects of valid or invalid cues on spatial shifts of attention using the Posner paradigm. Experiment 1 used a forced-choice task. Participants quickly reported the location of a target that appeared left or right of the fixation point, preceded by a cue on the same (valid) or opposite side (invalid). Experiments 2, 3, and 4 also used valid and invalid cues but required participants to simply detect a target appearing on the left or right side. Experiment 5 used a speeded discrimination task, in which participants quickly reported the orientation of a Gabor. We observed expected influences of validity and stimulus onset asynchrony as well as inhibition of return; however, none of the experiments suggested that head-on-trunk position created or changed visual field advantages, contrary to earlier reports. Our results showed that the manipulations of the body schema did not modulate attentional processes in the healthy brain, unlike neuropsychological studies on neglect patients. Our findings suggest that spatial neglect reflects a state of the lesioned brain that is importantly different from that of the normally functioning brain. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Biased saccadic responses to emotional stimuli in anxiety: An antisaccade study Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e86474, 2014. @article{Chen2014b, Research suggests that anxiety is maintained by an attentional bias to threat, and a growing base of evidence suggests that anxiety may additionally be associated with the deficient attentional processing of positive stimuli. The present study sought to examine whether such anxiety-linked attentional biases were associated with either stimulus driven or attentional control mechanisms of attentional selectivity. High and low trait anxious participants completed an emotional variant of an antisaccade task, in which they were required to prosaccade towards, or antisaccade away from a positive, neutral or threat stimulus, while eye movements were recorded. While low anxious participants were found to be slower to saccade in response to positive stimuli, irrespective of whether a pro- or antisaccade was required, such a bias was absent in high anxious individuals. Analysis of erroneous antisaccades further revealed, at trend level, that anxiety was associated with reduced peak velocity in response to threat. The findings suggest that anxiety is associated with the aberrant processing of positive stimuli, and greater compensatory efforts in the inhibition of threat. The findings further highlight the relevance of considering saccade peak velocity in the assessment of anxiety-linked attentional processing. |
Sheng-Chang Chen; Hsiao-Ching She; Ming-Hua Chuang; Jiun-Yu Wu; Jie-Li Tsai; Tzyy-Ping Jung Eye movements predict students' computer-based assessment performance of physics concepts in different presentation modalities Journal Article In: Computers and Education, vol. 74, pp. 61–72, 2014. @article{Chen2014a, Despite decades of studies on the link between eye movements and human cognitive processes, the exact nature of the link between eye movements and computer-based assessment performance still remains unknown. To bridge this gap, the present study investigates whether human eye movement dynamics can predict computer-based assessment performance (accuracy of response) in different presentation modalities (picture vs. text). An eye-tracking system was employed to collect 63 college students' eye movement behaviors while they engaged with computer-based physics concept questions presented as either pictures or text. Students' responses were collected immediately after the picture or text presentations in order to determine the accuracy of responses. The results demonstrated that students' eye movement behavior can successfully predict their computer-based assessment performance. Remarkably, the mean fixation duration has the greatest power to predict the likelihood of responding correctly to the physics concept questions, followed by the proportion of re-reading time. Additionally, the mean saccade distance has the least, and a negative, power to predict the likelihood of responding correctly in the picture presentation. Interestingly, pictorial presentations appear to convey physics concepts more quickly and efficiently than do textual presentations. This study adds empirical evidence of a prediction model between eye movement behaviors and successful cognitive performance. Moreover, it provides insight into the modality effects on students' computer-based assessment performance through the use of eye movement behavior evidence. |
Xiaorong Cheng; Qi Yang; Yaqian Han; Xianfeng Ding; Zhao Fan Capacity limit of simultaneous temporal processing: How many concurrent 'clocks' in vision? Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e91797, 2014. @article{Cheng2014, A fundamental ability for humans is to monitor and process multiple temporal events that occur at different spatial locations simultaneously. A great number of studies have demonstrated simultaneous temporal processing (STP) in human and animal participants, i.e., multiple 'clocks' rather than a single 'clock'. However, to date, we still have no knowledge about the exact limitation of the STP in vision. Here we provide the first experimental measurement of this critical parameter in human vision by using two novel and complementary paradigms. The first paradigm combines merits of a temporal oddball-detection task and a capacity measurement widely used in the studies of visual working memory to quantify the capacity of STP (CSTP). The second paradigm uses a two-interval temporal comparison task with various encoded spatial locations involved in the standard temporal intervals to rule out an alternative, 'object individuation'-based, account of CSTP, which is measured by the first paradigm. Our results of both paradigms indicate consistently that the capacity limit of simultaneous temporal processing in vision is around 3 to 4 spatial locations. Moreover, the binding of the 'local clock' and its specific location is undermined by bottom-up competition of spatial attention, indicating that the time-space binding is resource-consuming. Our finding that the capacity of STP is not constrained by the capacity of visual working memory (VWM) supports the idea that the representations of STP are likely stored and operated in units different from those of VWM. The second paradigm further confirms that a limited number of location-bound 'local clocks' are activated and maintained during a time window of several hundred milliseconds. |
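The abstract above refers to "a capacity measurement widely used in the studies of visual working memory". The most common such estimate is Cowan's K; treating that as the intended measure is an assumption on our part, since the abstract does not spell out the formula.

```python
# The most widely used capacity estimate in visual working memory research is
# Cowan's K. Assuming it is the measure referred to in the abstract (the paper
# may use a variant), capacity is estimated from change-detection accuracy as:
#   K = (hit rate + correct rejection rate - 1) * set size

def cowans_k(hit_rate, correct_rejection_rate, set_size):
    """Cowan's K: capacity = (hits + correct rejections - 1) * set size."""
    return (hit_rate + correct_rejection_rate - 1.0) * set_size

# E.g. monitoring 6 locations with 80% hits and 75% correct rejections yields
# an estimated capacity of about 3.3 items, consistent with the 3-4 location
# limit reported above.
k = cowans_k(0.80, 0.75, 6)
```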