All EyeLink Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (along with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2014 |
Danijela Trenkic; Jelena Mirkovic; Gerry T. M. Altmann Real-time grammar processing by native and non-native speakers: Constructions unique to the second language Journal Article In: Bilingualism: Language and Cognition, vol. 17, no. 2, pp. 237–257, 2014. @article{Trenkic2014, We investigated second language (L2) comprehension of grammatical structures that are unique to the L2, and which are known to cause persistent difficulties in production. A visual-world eye-tracking experiment focused on online comprehension of English articles by speakers of the article-lacking Mandarin, and a control group of English native speakers. The results show that non-native speakers from article-lacking backgrounds can incrementally utilise the information signalled by L2 articles in real time to constrain referential domains and resolve reference more efficiently. The findings support the hypothesis that L2 processing does not always over-rely on pragmatic affordances, and that some morphosyntactic structures unique to the target language can be processed in a targetlike manner in comprehension, despite persistent difficulties with their production. A novel proposal, based on multiple meaning-to-form, but consistent form-to-meaning mappings, is developed to account for such comprehension-production asymmetries. © 2013 Cambridge University Press. |
Alison M. Trude; Melissa C. Duff; Sarah Brown-Schmidt Talker-specific learning in amnesia: Insight into mechanisms of adaptive speech perception Journal Article In: Cortex, vol. 54, no. 1, pp. 117–123, 2014. @article{Trude2014, A hallmark of human speech perception is the ability to comprehend speech quickly and effortlessly despite enormous variability across talkers. However, current theories of speech perception do not make specific claims about the memory mechanisms involved in this process. To examine whether declarative memory is necessary for talker-specific learning, we tested the ability of amnesic patients with severe declarative memory deficits to learn and distinguish the accents of two unfamiliar talkers by monitoring their eye-gaze as they followed spoken instructions. Analyses of the time-course of eye fixations showed that amnesic patients rapidly learned to distinguish these accents and tailored perceptual processes to the voice of each talker. These results demonstrate that declarative memory is not necessary for this ability and point to the involvement of non-declarative memory mechanisms. These results are consistent with findings that other social and accommodative behaviors are preserved in amnesia and contribute to our understanding of the interactions of multiple memory systems in the use and understanding of spoken language. |
Chien-Chih Tseng; Ching-Hui Chen; Hsuan-Chih Chen; Yao-Ting Sung; Kuo-En Chang Verification of Dual Factors theory with eye movements during a matchstick arithmetic insight problem Journal Article In: Thinking Skills and Creativity, vol. 13, pp. 129–140, 2014. @article{Tseng2014, Representational Change Theory claims that participants form inappropriate representations at the beginning of the insight problem solving process and that these initial representations must be transformed to discover the solution (Knoblich, Ohlsson, Haider, & Rhenius, 1999; Knoblich, Ohlsson, & Raney, 2001; Ohlsson, 1992). The theory also claims that all participants are trapped by inappropriate representations, regardless of the result, but it is easier for successful participants to transform their initial representations. However, the transformation of representations is not the only critical factor. This study investigates the hypothesis that the process of fixedness averting plays an important role in insight problem solving and is helpful for representational change. To verify the influence of fixedness averting on representational change processes, matchstick arithmetic problems were employed as an experimental model. In experiment 1, insight problem solving results could be predicted within the first third of the duration of the task. The gaze duration in the fixation region of successful participants was shorter than the gaze duration of unsuccessful participants. In experiment 2, participants' foci of attention were experimentally manipulated by presenting different animated diagrams to guide their attention. We found that the rate of correct responses was significantly reduced when participants' attention was guided to the fixation region. Representational Change Theory declares that changing inappropriate initial representations is necessary for solving insight problems. The present study demonstrates that in addition to representational change, fixedness averting is also crucial to problem solving. |
Lin-Yuan Tseng; Philip Tseng; Wei-Kuang Liang; Daisy L. Hung; Ovid J. L. Tzeng; Neil G. Muggleton; Chi-Hung Juan The role of superior temporal sulcus in the control of irrelevant emotional face processing: A transcranial direct current stimulation study Journal Article In: Neuropsychologia, vol. 64, pp. 124–133, 2014. @article{Tseng2014a, Emotional faces are often salient cues of threats or other important contexts, and may therefore have a large effect on cognitive processes of the visual environment. Indeed, many behavioral studies have demonstrated that emotional information can modulate visual attention and eye movements. The aim of the present study was to investigate (1) how irrelevant emotional face distractors affect saccadic behaviors and (2) whether such emotional effects reflect a specific neural mechanism or merely biased selective attention. We combined a visual search paradigm that incorporated manipulation of different types of distractor (fearful faces or scrambled faces) and delivered anodal transcranial direct current stimulation (tDCS) over the superior temporal sulcus and the frontal eye field to investigate the functional roles of these areas in processing facial expressions and eye movements. Our behavioral data suggest that irrelevant emotional distractors can modulate saccadic behaviors. The tDCS results showed that while the right frontal eye field (rFEF) played a more general role in controlling saccadic behavior, the right superior temporal sulcus (rSTS) was mainly involved in facial expression processing. Furthermore, rSTS played a critical role in processing facial expressions even when such expressions were not relevant to the task goal, implying that facial expression processing may be automatic irrespective of the task goal. |
Yuan-Chi Tseng; Joshua I. Glaser; Eamon Caddigan; Alejandro Lleras Modeling the effect of selection history on pop-out visual search Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e89996, 2014. @article{Tseng2014b, While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. |
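The Tseng et al. entry above describes fitting the Ratcliff diffusion model to saccadic selections, with a bias toward the current trial's most likely target color emerging as the key parameter. Purely as a hypothetical illustration of that idea (not the authors' implementation; all parameter values below are invented), the following sketch simulates a two-boundary drift-diffusion process in which selection history is expressed as a shift of the starting point:

```python
import numpy as np

def simulate_ddm(drift=0.2, bias=0.0, threshold=1.0, noise=1.0,
                 dt=0.002, max_t=3.0, n_trials=300, rng=None):
    """Simulate a two-boundary drift-diffusion process.

    `bias` shifts the starting point toward the upper (+threshold) boundary;
    selection history is modeled here, for illustration only, as a change
    in this starting point. Trials that never cross a boundary within
    max_t are classified by their final sign, which is adequate for a sketch.
    """
    rng = rng or np.random.default_rng(0)
    choices, rts = [], []
    for _ in range(n_trials):
        x = bias * threshold          # history-dependent starting point
        t = 0.0
        while abs(x) < threshold and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x >= threshold else 0)
        rts.append(t)
    return np.array(choices), np.array(rts)

# Repetition of the previous target color -> starting point nudged toward it.
for label, b in [("no bias", 0.0), ("history-congruent bias", 0.3)]:
    choice, rt = simulate_ddm(bias=b)
    print(f"{label}: P(repeat-congruent choice) = {choice.mean():.2f}, "
          f"mean decision time = {rt.mean() * 1000:.0f} ms")
```

In a diffusion framework, moving the starting point toward the history-congruent boundary increases the proportion of congruent selections and shortens their latencies, which is the kind of qualitative signature a starting-point bias account of inter-trial effects predicts.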
Yusuke Uchida; Nobuaki Mizuguchi; Masaaki Honda; Kazuyuki Kanosue Prediction of shot success for basketball free throws: Visual search strategy Journal Article In: European Journal of Sport Science, vol. 14, no. 5, pp. 426–432, 2014. @article{Uchida2014, In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed. |
Hiroshi Ueda; Kohske Takahashi; Katsumi Watanabe Influence of removal of invisible fixation on the saccadic and manual gap effect Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 329–336, 2014. @article{Ueda2014, Saccadic and manual reactions to a peripherally presented target are facilitated by removing a central fixation stimulus shortly before a target onset (the gap effect). The present study examined the effects of removal of a visible and invisible fixation point on the saccadic gap effect and the manual gap effect. Participants were required to fixate a central fixation point and respond to a peripherally presented target as quickly and accurately as possible by making a saccade (Experiment 1) or pressing a corresponding key (Experiment 2). The fixation point was dichoptically presented, and visibility was manipulated by using binocular rivalry and continuous flash suppression techniques. In both saccade and key-press tasks, removing the visible fixation strongly quickened the responses. Furthermore, the invisible fixation, which remained on the display but was suppressed, significantly delayed the saccadic response. In contrast, the invisible fixation had no effect on the manual task. These results indicate that partially different processes mediate the saccadic gap effect and the manual gap effect. In particular, unconscious processes might modulate an oculomotor-specific component of the saccadic gap effect, presumably via subcortical mechanisms. |
Hiroshi Ueda; Kohske Takahashi; Katsumi Watanabe Effects of direct and averted gaze on the subsequent saccadic response Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 4, pp. 1085–1092, 2014. @article{Ueda2014a, The saccadic latency to visual targets is susceptible to the properties of the currently fixated objects. For example, the disappearance of a fixation stimulus prior to presentation of a peripheral target shortens saccadic latencies (the gap effect). In the present study, we investigated the influences of a social signal from a facial fixation stimulus (i.e., gaze direction) on subsequent saccadic responses in the gap paradigm. In Experiment 1, a cartoon face with a direct or averted gaze was used as a fixation stimulus. The pupils of the face were unchanged (overlap), disappeared (gap), or were translated vertically to make or break eye contact (gaze shift). Participants were required to make a saccade toward a target to the left or the right of the fixation stimulus as quickly as possible. The results showed that the gaze direction influenced saccadic latencies only in the gaze shift condition, but not in the gap or overlap condition; the direct-to-averted gaze shift (i.e., breaking eye contact) yielded shorter saccadic latencies than did the averted-to-direct gaze shift (i.e., making eye contact). Further experiments revealed that this effect was eye contact specific (Exp. 2) and that the appearance of an eye gaze immediately before the saccade initiation also influenced the saccadic latency, depending on the gaze direction (Exp. 3). These results suggest that the latency of target-elicited saccades can be modulated not only by physical changes of the fixation stimulus, as has been seen in the conventional gap effect, but also by a social signal from the attended fixation stimulus. |
Grayden J. F. Solman; Kersondra Hickey; Daniel Smilek Comparing target detection errors in visual search and manually-assisted search Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 4, pp. 945–958, 2014. @article{Solman2014, Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. |
Grayden J. F. Solman; Alan Kingstone Balancing energetic and cognitive resources: Memory use during search depends on the orienting effector Journal Article In: Cognition, vol. 132, no. 3, pp. 443–454, 2014. @article{Solman2014a, Search outside the laboratory involves tradeoffs among a variety of internal and external exploratory processes. Here we examine the conditions under which item specific memory from prior exposures to a search array is used to guide attention during search. We extend the hypothesis that memory use increases as perceptual search becomes more difficult by turning to an ecologically important type of search difficulty - energetic cost. Using optical motion tracking, we introduce a novel head-contingent display system, which enables the direct comparison of search using head movements and search using eye movements. Consistent with the increased energetic cost of turning the head to orient attention, we discover greater use of memory in head-contingent versus eye-contingent search, as reflected in both timing and orienting metrics. Our results extend theories of memory use in search to encompass embodied factors, and highlight the importance of accounting for the costs and constraints of the specific motor groups used in a given task when evaluating cognitive effects. |
David Souto; Dirk Kerzel Ocular tracking responses to background motion gated by feature-based attention Journal Article In: Journal of Neurophysiology, vol. 112, no. 5, pp. 1074–1081, 2014. @article{Souto2014, Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in the horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset. |
Eelke Spaak; Floris P. Lange; Ole Jensen Local entrainment of alpha oscillations by visual stimuli causes cyclic modulation of perception Journal Article In: Journal of Neuroscience, vol. 34, no. 10, pp. 3536–3544, 2014. @article{Spaak2014, Prestimulus oscillatory neural activity in the visual cortex has large consequences for perception and can be influenced by top-down control from higher-order brain regions. Making a causal claim about the mechanistic role of oscillatory activity requires that oscillations be directly manipulated independently of cognitive instructions. There are indications that a direct manipulation, or entrainment, of visual alpha activity is possible through visual stimulation. However, three important questions remain: (1) Can the entrained alpha activity be endogenously maintained in the absence of continuous stimulation?; (2) Does entrainment of alpha activity reflect a global or a local process?; and (3) Does the entrained alpha activity influence perception? To address these questions, we presented human subjects with rhythmic stimuli in one visual hemifield, and arrhythmic stimuli in the other. After rhythmic entrainment, we found a periodic pattern in detection performance of near-threshold targets specific to the entrained hemifield. Using magnetoencephalography to measure ongoing brain activity, we observed strong alpha activity contralateral to the rhythmic stimulation outlasting the stimulation by several cycles. This entrained alpha activity was produced locally in early visual cortex, as revealed by source analysis. Importantly, stronger alpha entrainment predicted a stronger phasic modulation of detection performance in the entrained hemifield. These findings argue for a cortically focal entrainment of ongoing alpha oscillations by visual stimulation, with concomitant consequences for perception. Our results support the notion that oscillatory brain activity in the alpha band provides a causal mechanism for the temporal organization of visual perception. |
Laura J. Speed; Gabriella Vigliocco Eye movements reveal the dynamic simulation of speed in language Journal Article In: Cognitive Science, vol. 38, no. 2, pp. 367–382, 2014. @article{Speed2014, This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed of the verb in the sentence, speaking rate, and configuration of a supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene. |
Sara Spotorno; George L. Malcolm; Benjamin W. Tatler How context information and target information guide the eyes from the first epoch of search in real-world scenes Journal Article In: Journal of Vision, vol. 14, no. 2, pp. 1–21, 2014. @article{Spotorno2014, This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene. |
Steven D. Stagg; Karina J. Linnell; Pamela Heaton Investigating eye movement patterns, language, and social ability in children with autism spectrum disorder Journal Article In: Development and Psychopathology, vol. 26, no. 2, pp. 529–537, 2014. @article{Stagg2014, Although all intellectually high-functioning children with autism spectrum disorder (ASD) display core social and communication deficits, some develop language within a normative timescale and others experience significant delays and subsequent language impairment. Early attention to social stimuli plays an important role in the emergence of language, and reduced attention to faces has been documented in infants later diagnosed with ASD. We investigated the extent to which patterns of attention to social stimuli would differentiate early and late language onset groups. Children with ASD (mean age = 10 years) differing on language onset timing (late/normal) and a typically developing comparison group completed a task in which visual attention to interacting and noninteracting human figures was mapped using eye tracking. Correlations on visual attention data and results from tests measuring current social and language ability were conducted. Patterns of visual attention did not distinguish typically developing children and ASD children with normal language onset. Children with ASD and late language onset showed significantly reduced attention to salient social stimuli. Associations between current language ability and social attention were observed. Delay in language onset is associated with current language skills as well as with specific eye-tracking patterns. |
Hanneke Bouwsema; Corry K. Sluis; Raoul M. Bongers Changes in performance over time while learning to use a myoelectric prosthesis Journal Article In: Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, pp. 1–15, 2014. @article{Bouwsema2014, BACKGROUND: Training increases the functional use of an upper limb prosthesis, but little is known about how people learn to use their prosthesis. The aim of this study was to describe the changes in performance with an upper limb myoelectric prosthesis during practice. The results provide a basis to develop an evidence-based training program. METHODS: Thirty-one able-bodied participants took part in an experiment as well as thirty-one age- and gender-matched controls. Participants in the experimental condition, randomly assigned to one of four groups, practiced with a myoelectric simulator for five sessions in a two-week period. Group 1 practiced direct grasping, Group 2 practiced indirect grasping, Group 3 practiced fixating, and Group 4 practiced a combination of all three tasks. The Southampton Hand Assessment Procedure (SHAP) was assessed in a pretest, posttest, and two retention tests. Participants in the control condition performed SHAP two times, two weeks apart with no practice in between. Compressible objects were used in the grasping tasks. Changes in end-point kinematics, joint angles, and grip force control, the latter measured by magnitude of object compression, were examined. RESULTS: The experimental groups improved more on SHAP than the control group. Interestingly, the fixation group improved comparably to the other training groups on the SHAP. Improvement in global position of the prosthesis leveled off after three practice sessions, whereas learning to control grip force required more time. The indirect grasping group had the smallest object compression in the beginning and this did not change over time, whereas the direct grasping and the combination group had a decrease in compression over time. Moreover, the indirect grasping group had the smallest grasping time that did not vary over object rigidity, while for the other two groups the grasping time decreased with an increase in object rigidity. CONCLUSIONS: A training program should spend more time on learning fine control aspects of the prosthetic hand during rehabilitation. Moreover, training should start with the indirect grasping task that has the best performance, which is probably due to the higher amount of useful information available from the sound hand. |
Oliver Boxell Lexical fillers permit real-time gap-search inside island domains Journal Article In: Journal of Cognitive Science, vol. 14, no. 1, pp. 97–136, 2014. @article{Boxell2014, It has often been reported that lexical fillers (e.g. which house) improve the overall acceptability of many island constraint violations relative to bare fillers (e.g. what). The current study attempts to test for the first time whether lexical fillers reduce real-time sensitivity to wh-islands as well. Results from an eyetracking-while-reading study are reported that demonstrate native English speakers' sensitivity to a plausibility manipulation between a fronted filler phrase and a downstream subcategorizing verb inside a wh-island domain. The effect is found as the verb was encountered in real-time, and only when the filler element contains lexical information, not when it is bare. This is taken to show that online sensitivity to the wh-island constraint is reduced when the filler preceding it is lexical. The strengths and weaknesses and overall compatibility of a range of grammatical and processing theories are considered in relation to this finding. |
C. Bradley; Jared Abrams; Wilson S. Geisler Retina-V1 model of detectability across the visual field Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–22, 2014. @article{Bradley2014, A practical model is proposed for predicting the detectability of targets at arbitrary locations in the visual field, in arbitrary gray scale backgrounds, and under photopic viewing conditions. The major factors incorporated into the model include (a) the optical point spread function of the eye, (b) local luminance gain control (Weber's law), (c) the sampling array of retinal ganglion cells, (d) orientation and spatial frequency-dependent contrast masking, (e) broadband contrast masking, and (f) efficient response pooling. The model is tested against previously reported threshold measurements on uniform backgrounds (the ModelFest data set and data from Foley, Varadharajan, Koh, & Farias, 2007) and against new measurements reported here for several ModelFest targets presented on uniform, 1/f noise, and natural backgrounds at retinal eccentricities ranging from 0 degrees to 10 degrees. Although the model has few free parameters, it is able to account quite well for all the threshold measurements. |
John Brand; Aaron P. Johnson Attention to local and global levels of hierarchical Navon figures affects rapid scene categorization Journal Article In: Frontiers in Psychology, vol. 5, pp. 1274, 2014. @article{Brand2014, In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain low spatial frequency (LSF) content of one image, and high spatial frequency (HSF) content from a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF based hybrid categorization was faster following global Navon tasks, suggesting that LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. Similar to Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF based hybrid categorization was slower following global than local Navon tasks. |
Eduard Brandstätter; Christof Körner Attention in risky choice Journal Article In: Acta Psychologica, vol. 152, pp. 166–176, 2014. @article{Brandstaetter2014, Previous research on the processes involved in risky decisions has rarely linked process data to choice directly. We used a simple measure based on the relative amount of attentional deployment to different components (gains/losses and their probabilities) of a risky gamble during the choice process, and we related this measure to the actual choice. In an experiment we recorded the decisions, decision times, and eye movements of 80 participants who made decisions on 11 choice problems. We used the number of eye fixations and fixation transitions to trace the deployment of attention during the choice process and obtained the following main results. First, different components of a gamble attracted different amounts of attention depending on participants' actual choice. This was reflected in both the number of fixations and the fixation transitions. Second, the last-fixated gamble but not the last-fixated reason predicted participants' choices. Third, a comparison of data obtained with eye tracking and data obtained with verbal protocols from a previous study showed a large degree of convergence regarding the process of risky choice. Together these findings tend to support dimensional decision strategies such as the priority heuristic. |
D. J. Bridge; Joel L. Voss Hippocampal binding of novel information with dominant memory traces can support both memory stability and change Journal Article In: Journal of Neuroscience, vol. 34, no. 6, pp. 2203–2213, 2014. @article{Bridge2014, Memory stability and change are considered opposite outcomes. We tested the counterintuitive notion that both depend on one process: hippocampal binding of memory features to associatively novel information, or associative novelty binding (ANB). Building on the idea that dominant memory features, or “traces,” are most susceptible to modification, we hypothesized that ANB would selectively involve dominant traces. Therefore, memory stability versus change should depend on whether the currently dominant trace is old versus updated; in either case, novel information will be bound with it, causing either maintenance (when old) or change (when updated). People in our experiment studied objects at locations within scenes (contexts). During reactivation in a new context, subjects moved studied objects to new locations either via active location recall or by passively dragging objects to predetermined locations. After active reactivation, the new object location became dominant in memory, whereas after passive reactivation, the old object location maintained dominance. In both cases, hippocampal ANB bound the currently dominant object-location memory with a context with which it was not paired previously (i.e., associatively novel). Stability occurred in the passive condition when ANB united the dominant original location trace with an associatively novel newer context. Change occurred in the active condition when ANB united the dominant updated object location with an associatively novel and older context. Hippocampal ANB of the currently dominant trace with associatively novel contextual information thus provides a single mechanism to support memory stability and change, with shifts in trace dominance during reactivation dictating the outcome. |
Matthew W. Bridgman; Warren S. Brown; Michael L. Spezio; Matthew K. Leonard; Ralph Adolphs; Lynn K. Paul Facial emotion recognition in agenesis of the corpus callosum Journal Article In: Journal of Neurodevelopmental Disorders, vol. 6, pp. 1–14, 2014. @article{Bridgman2014, BACKGROUND: Impaired social functioning is a common symptom of individuals with developmental disruptions in callosal connectivity. Among these developmental conditions, agenesis of the corpus callosum provides the most extreme and clearly identifiable example of callosal disconnection. To date, deficits in nonliteral language comprehension, humor, theory of mind, and social reasoning have been documented in agenesis of the corpus callosum. Here, we examined a basic social ability as yet not investigated in this population: recognition of facial emotion and its association with social gaze. METHODS: Nine individuals with callosal agenesis and nine matched controls completed four tasks involving emotional faces: emotion recognition from upright and inverted faces, gender recognition, and passive viewing. Eye-tracking data were collected concurrently on all four tasks and analyzed according to designated facial regions of interest. RESULTS: Individuals with callosal agenesis exhibited impairments in recognizing emotions from upright faces, in particular lower accuracy for fear and anger, and these impairments were directly associated with diminished attention to the eye region. The callosal agenesis group exhibited greater consistency in emotion recognition across conditions (upright vs. inverted), with poorest performance for fear identification in both conditions. The callosal agenesis group also had atypical facial scanning (lower fractional dwell time in the eye region) during gender naming and passive viewing of faces, but they did not differ from controls on gender naming performance. The pattern of results did not differ when taking into account full-scale intelligence quotient or presence of autism spectrum symptoms. CONCLUSIONS: Agenesis of the corpus callosum results in a pattern of atypical facial scanning characterized by diminished attention to the eyes. This pattern suggests that reduced callosal connectivity may contribute to the development and maintenance of emotion processing deficits involving reduced attention to others' eyes. |
Allison E. Britt; Daniel Mirman; Sergey A. Kornilov; James S. Magnuson Effect of repetition proportion on language-driven anticipatory eye movements Journal Article In: Acta Psychologica, vol. 145, no. 1, pp. 128–138, 2014. @article{Britt2014, Previous masked priming research in word recognition has demonstrated that repetition priming is influenced by experiment-wise information structure, such as proportion of target repetition. Research using naturalistic tasks and eye-tracking has shown that people use linguistic knowledge to anticipate upcoming words. We examined whether the proportion of target repetition within an experiment can have a similar effect on anticipatory eye movements. We used a word-to-picture matching task (i.e., the visual world paradigm) with target repetition proportion carefully controlled. Participants' eye movements were tracked starting when the pictures appeared, one second prior to the onset of the target word. Targets repeated from the previous trial were fixated more than other items during this preview period when target repetition proportion was high and less than other items when target repetition proportion was low. These results indicate that linguistic anticipation can be driven by short-term within-experiment trial structure, with implications for the generalization of priming effects, the bases of anticipatory eye movements, and experiment design. |
Jon Brock; Kate Nation The hardest butter to button: Immediate context effects in spoken word identification Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 114–123, 2014. @article{Brock2014, According to some theories, the context in which a spoken word is heard has no impact on the earliest stages of word identification. This view has been challenged by recent studies indicating an interactive effect of context and acoustic similarity on language-mediated eye movements. However, an alternative explanation for these results is that participants looked less at acoustically similar objects in constraining contexts simply because they were looking more at other objects that were cued by the context. The current study addressed this concern whilst providing a much finer grained analysis of the temporal evolution of context effects. Thirty-two adults listened to sentences while viewing a computer display showing four objects. As expected, shortly after the onset of a target word (e.g., "button") in a neutral context, participants saccaded preferentially towards a cohort competitor of the word (e.g., butter). This effect was significantly reduced when the preceding verb made the competitor an unlikely referent (e.g., "Sam fastened the button"), even though there were no other contextually congruent objects in the display. Moreover, the time-course of these two effects was identical to within approximately 30 ms, indicating that certain forms of contextual information can have a near-immediate effect on word identification. |
Björn Browatzki; Heinrich H. Bülthoff; Lewis L. Chuang A comparison of geometric- and regression-based mobile gaze-tracking Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 200, 2014. @article{Browatzki2014, Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation. |
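The Browatzki et al. entry above compares a geometric gaze model with a regression approach that estimates the gaze intersection directly from tracker data. As an illustrative sketch of the regression idea only (not the authors' pipeline; the feature layout and calibration data below are invented stand-ins), a Gaussian process can be trained to map combined eye-in-head and head-in-world measurements to on-screen gaze coordinates, which also yields the per-prediction uncertainty mentioned in the abstract:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical calibration data: each row holds eye-in-head angles (2),
# head position (3), and head orientation (3); targets are known 2D
# screen coordinates fixated during calibration.
rng = np.random.default_rng(42)
X_calib = rng.normal(size=(200, 8))               # stand-in for real tracker features
true_map = rng.normal(size=(8, 2))
Y_calib = X_calib @ true_map + 0.05 * rng.normal(size=(200, 2))

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X_calib, Y_calib)

# Predict gaze position on the display for new samples, with uncertainty.
X_new = rng.normal(size=(5, 8))
gaze_xy, gaze_std = gp.predict(X_new, return_std=True)
print(gaze_xy)   # estimated screen coordinates
print(gaze_std)  # predictive uncertainty, i.e. confidence bounds on each estimate
```

A regression formulation like this sidesteps explicit geometric calibration of the head-eye-display chain, at the cost of needing calibration samples that cover the range of head positions encountered during walking.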
Florian Brugger; Michael Schüpbach; Michel Koenig; René M. Müri; Stephan Bohlhalter; Alain Kaelin-Lang; Christian P. Kamm; Georg Kägi The clinical spectrum of ataxia with oculomotor apraxia type 2 Journal Article In: Movement Disorders Clinical Practice, vol. 1, no. 2, pp. 106–109, 2014. @article{Brugger2014, Ataxia with oculomotor apraxia type 2 (AOA2) is an inherited disorder caused by mutations within both alleles of the senataxin gene. First symptoms are usually recognized before the age of 30. Unlike in several other autosomal recessive cerebellar ataxia syndromes, levels of alpha-fetoprotein are nearly always elevated in AOA2, which narrows down the differential diagnosis. We present 3 video cases illustrating and expanding the clinical spectrum of AOA2, with 1 case bearing a novel mutation with cervical dystonia as the first symptom, the absence of neuropathy, and a disease onset beyond the age of 40. Furthermore, all patients were assessed by oculographic analysis, which revealed distinct patterns of oculomotor abnormalities. The clinical spectrum of AOA2 might be even broader than previously described in larger series. Oculography might be a useful tool to detect subclinical oculomotor apraxia in this disorder. |
Robert P. Burriss; Urszula M. Marcinkowska; Minna T. Lyons Gaze properties of women judging the attractiveness of masculine and feminine male faces Journal Article In: Evolutionary Psychology, vol. 12, no. 1, pp. 19–35, 2014. @article{Burriss2014, Most studies of female facial masculinity preference have relied upon self-reported preference, with participants selecting or rating the attractiveness of faces that differ in masculinity. However, researchers have not established a consensus as to whether women's general preference is for male faces that are masculine or feminine, and several studies have indicated that women prefer neither. We investigated women's preferences for male facial masculinity using standard two-alternative forced choice (2AFC) preference trials, paired with eye tracking measures, to determine whether conscious and non-conscious measures of preference yield similar results. We found that women expressed a preference for, gazed longer at, and fixated more frequently on feminized male faces. We also found effects of relationship status, relationship context (whether faces are judged for attractiveness as a long- or short-term partner), and hormonal contraceptive use. These results support previous findings that women express a preference for feminized over masculinized male faces, demonstrate that non-conscious measures of preference for this trait echo consciously expressed preferences, and suggest that certain aspects of the preference decision-making process may be better captured by eye tracking than by 2AFC preference trials. |
D. Brandon Burtis; Kenneth M. Heilman; Jue Mo; Chao Wang; Gregory F. Lewis; Maria I. Davilla; Mingzhou Ding; Stephen W. Porges; John B. Williamson The effects of constrained left versus right monocular viewing on the autonomic nervous system Journal Article In: Biological Psychology, vol. 100, no. 1, pp. 79–85, 2014. @article{Burtis2014, Asymmetrical activation of right and left hemispheres differentially influences the autonomic nervous system. Additionally, each hemisphere primarily receives retinocollicular projections from the contralateral eye. To learn if asymmetrical hemispheric activation induced by monocular viewing would influence relative pupillary size and respiratory hippus variability (RHV), a measure of parasympathetic activity, healthy participants had their left, right or neither eye patched. Pupillary sizes were then recorded with infrared pupillography. Pupillary dilation was significantly greater with left than right eye viewing. RHV, however, was not different between eye viewing conditions. These differences in pupil dilatation may have been caused by relatively greater activation of the right hemispheric-mediated sympathetic activity induced by left monocular viewing or relatively greater deactivation of the left hemispheric-mediated parasympathetic activity induced by right eye patching. The absence of an asymmetry in RHV, however, suggests that hemispheric asymmetry of sympathetic activation was primarily responsible for this ocular asymmetry of pupil dilation. |
Robyn Burton; Nicholas D. Smith; David P. Crabb Eye movements and reading in glaucoma: observations on patients with advanced visual field loss Journal Article In: Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 252, no. 10, pp. 1621–1630, 2014. @article{Burton2014, PURPOSE: To investigate the relationship between reading speed and eye movements in patients with advanced glaucomatous visual field (VF) defects and age-similar visually healthy people. METHODS: Eighteen patients with advanced bilateral VF defects (mean age: 71, standard deviation [SD]: 7 years) and 39 controls (mean age: 67, SD: 8 years) had reading speed measured using short passages of text on a computer set-up incorporating eye tracking. Scanpaths were plotted and analysed from these experiments to derive measures of 'perceptual span' (total number of letters read per number of saccades) and 'text saturation' (the distance between the first and last fixation on lines of text). Another eye movement measure, termed 'saccadic frequency' (total number of saccades made to read a single word), was derived from a separate lexical decision task, where words were presented in isolation. RESULTS: Significant linear association was demonstrated between perceptual span and reading speed in patients (R^2 = 0.42) and controls (R^2 = 0.56). Linear association between saccadic frequency during the LDT and reading speed was also found in patients (R^2 = 0.42), but not in controls (R^2 = 0.02). Patients also exhibited greater average text saturation than controls (P = 0.004). CONCLUSION: Some, but not all, patients with advanced VF defects read slower than controls using short text passages. Differences in eye movement behaviour may partly account for this variability in patients. These patients were shown to saturate lines of text more during reading, which may explain previously-reported difficulties with sustained reading. |
Thomas Busigny; Goedele Van Belle; Boutheina Jemel; Anthony Hosein; Sven Joubert; Bruno Rossion Face-specific impairment in holistic perception following focal lesion of the right anterior temporal lobe Journal Article In: Neuropsychologia, vol. 56, no. 1, pp. 312–333, 2014. @article{Busigny2014, Recent studies have provided solid evidence for pure cases of prosopagnosia following brain damage. The patients reported so far have posterior lesions encompassing either or both the right inferior occipital cortex and fusiform gyrus, and exhibit a critical impairment in generating a sufficiently detailed holistic percept to individualize faces. Here, we extended these observations to include the prosopagnosic patient LR (Bukach, Bub, Gauthier, & Tarr, 2006), whose damage is restricted to the anterior region of the right temporal lobe. First, we report that LR is able to discriminate parametrically defined individual exemplars of nonface object categories as accurately and quickly as typical observers, which suggests that the visual similarity account of prosopagnosia does not explain his impairments. Then, we show that LR does not present with the typical face inversion effect, whole-part advantage, or composite face effect and, therefore, has impaired holistic perception of individual faces. Moreover, the patient is more impaired at matching faces when the facial part he fixates is masked than when it is selectively revealed by means of gaze contingency. Altogether these observations support the view that the nature of the critical face impairment does not differ qualitatively across patients with acquired prosopagnosia, regardless of the localization of brain damage: all these patients appear to be impaired to some extent at what constitutes the heart of our visual expertise with faces, namely holistic perception at a sufficiently fine-grained level of resolution to discriminate exemplars of the face class efficiently. This conclusion raises issues regarding the existing criteria for diagnosis/classification of patients as cases of apperceptive or associative prosopagnosia. |
Charles F. Cadieu; Ha Hong; Daniel L. K. Yamins; Nicolas Pinto; Diego Ardila; Ethan A. Solomon; Najib J. Majaj; James J. DiCarlo Deep neural networks rival the representation of primate IT cortex for core visual object recognition Journal Article In: PLoS Computational Biology, vol. 10, no. 12, pp. e1003963, 2014. @article{Cadieu2014, The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. |
Xinying Cai; Camillo Padoa-Schioppa Contributions of orbitofrontal and lateral prefrontal cortices to economic choice and the good-to-action transformation Journal Article In: Neuron, vol. 81, no. 5, pp. 1140–1151, 2014. @article{Cai2014, Previous work indicates that economic decisions can be made independently of the visuomotor contingencies of the choice task (space of goods). However, the neuronal mechanisms through which the choice outcome (the chosen good) is transformed into a suitable action plan remain poorly understood. Here we show that neurons in lateral prefrontal cortex reflect the early stages of this good-to-action transformation. Monkeys chose between different juices. The experimental design dissociated in space and time the presentation of the offers and the saccade targets associated with them. We recorded from the orbital, ventrolateral, and dorsolateral prefrontal cortices (OFC, LPFCv, and LPFCd, respectively). Prior to target presentation, neurons in both LPFCv and LPFCd encoded the choice outcome in goods space. After target presentation, they gradually came to encode the location of the targets and the upcoming action plan. Consistent with the anatomical connectivity, all spatial and action-related signals emerged in LPFCv before LPFCd. |
Aurélie Calabrèse; Jean-Baptiste Bernard; Géraldine Faure; Louis Hoffart; Eric Castet Eye movements and reading speed in macular disease: The shrinking perceptual span hypothesis requires and is supported by a mediation analysis Journal Article In: Investigative Ophthalmology & Visual Science, vol. 55, no. 6, pp. 3638–3645, 2014. @article{Calabrese2014, Purpose. Reading speed of patients with Central Field Loss (CFL) correlates with the size of saccades (measured in letters per forward saccade - L/FS). We assessed whether this effect is mediated by the total number of fixations, by the average fixation duration, or by a mixture of both. Methods. We measured eye movements (with a video eyetracker) of 35 AMD and 4 Stargardt patients (better eye decimal acuity from 0.08 to 0.3) while they monocularly read single-line French sentences continuously displayed on a screen. All patients had a dense scotoma covering the fovea, as assessed with MP1 microperimetry, and therefore used eccentric viewing. Results were analyzed with regression-based mediation analysis, a modeling framework that informs on the underlying factors by which an independent variable affects a dependent variable. Results. Reading speed and average fixation duration are negatively correlated, a result that was not observed in prior studies with CFL patients. This effect of fixation duration on reading speed is still significant when partialling out the effect of the total number of fixations (slope:-0.75, p<0.001). Despite this large effect of fixation duration, mediation analysis shows that the effect of L/FS on reading speed is fully mediated by the total number of fixations (effect size: 0.96; CI[0.82, 1.12]) and not by fixation duration (effect size: 0.02; CI[-0.11,0.14]). Conclusions. Results are consistent with the shrinking perceptual span hypothesis: reading speed decreases with the average number of letters traversed on each forward saccade, an effect fully mediated by the total number of fixations. |
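The Calabrèse et al. entry above relies on regression-based mediation analysis to show that the effect of letters per forward saccade (L/FS) on reading speed runs through the total number of fixations. The sketch below illustrates only the basic single-mediator decomposition (total, direct, and indirect effects) on simulated stand-in data; it is not the authors' model, which also provided confidence intervals for the effect sizes:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in data (not the patients' data): letters per forward
# saccade (L/FS), total number of fixations, and reading speed.
rng = np.random.default_rng(1)
n = 39
lfs = rng.normal(5, 1, n)                        # letters per forward saccade
n_fix = 60 - 6 * lfs + rng.normal(0, 3, n)       # mediator: fewer fixations with larger L/FS
speed = 120 - 1.5 * n_fix + rng.normal(0, 5, n)  # outcome: reading speed

# Path a: effect of L/FS on the mediator (number of fixations).
a = sm.OLS(n_fix, sm.add_constant(lfs)).fit().params[1]
# Paths c' and b: effects of L/FS and the mediator on reading speed, jointly.
fit_y = sm.OLS(speed, sm.add_constant(np.column_stack([lfs, n_fix]))).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]
# Total effect c: L/FS on reading speed alone.
c = sm.OLS(speed, sm.add_constant(lfs)).fit().params[1]

print(f"total effect c      = {c:.2f}")
print(f"direct effect c'    = {c_prime:.2f}")
print(f"indirect effect a*b = {a * b:.2f}  (mediated via number of fixations)")
```

Full mediation corresponds to the pattern the abstract reports: the indirect path through the number of fixations accounts for essentially the whole effect, while the direct path (and here also any path through fixation duration) contributes little.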
R. Calen Walshe; Antje Nuthmann Asymmetrical control of fixation durations in scene viewing Journal Article In: Vision Research, vol. 100, pp. 38–46, 2014. @article{CalenWalshe2014, In two experiments we investigated the control of fixation durations in naturalistic scene viewing. Empirical evidence from the scene onset delay paradigm and numerical simulations of such data with the CRISP model [Psychological Review 117 (2010) 382-405] have suggested that processing related difficulties may lead to prolonged fixation durations. Here, we ask whether processing related facilitation may lead to comparable decreases to fixation durations. Research in visual search and reading has reported only uni-directional shifts. To address the question of unidirectional (slow down) as opposed to bidirectional (slow down and speed up) adjustment of fixation durations in the context of scene viewing, we used a saccade-contingent display change method to either reduce or increase the luminance of the scene during prespecified critical fixations. Degrading the stimulus by shifting luminance down resulted in an immediate increase to fixation durations. However, clarifying the stimulus by shifting luminance upwards did not result in a comparable decrease to fixation durations. These results suggest that the control of fixation durations in scene viewing is asymmetric, as has been reported for visual search and reading. |
Estela Camara; Lluis Fuentemilla Accessing forgotten memory traces from long-term memory via visual movements Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 930, 2014. @article{Camara2014, Because memory retrieval often requires overt responses, it is difficult to determine to what extent forgetting occurs as a problem in explicit accessing of long-term memory traces. In this study, we used eye-tracking measures in combination with a behavioral task that favored high forgetting rates to investigate the existence of memory traces from long-term memory in spite of failure in accessing them consciously. In two experiments, participants were encouraged to encode a large set of sound-picture-location associations. In a later test, sounds were presented and participants were instructed to visually scan, before a verbal memory report, for the correct location of the associated pictures in an empty screen. We found that the reactivation of associated memories by sound cues at test biased oculomotor behavior towards locations congruent with memory representations, even when participants failed to consciously provide a memory report of it. These findings reveal the emergence of a memory-guided behavior that can be used to map internal representations of forgotten memories from long-term memory. |
Nathan Caruana; Jon Brock No association between autistic traits and contextual influences on eye-movements during reading Journal Article In: PeerJ, vol. 2, pp. 1–16, 2014. @article{Caruana2014, Individuals with autism spectrum disorders are claimed to show a local cognitive bias, termed "weak central coherence", which manifests in a reduced influence of contextual information on linguistic processing. Here, we investigated whether this bias might also be demonstrated by individuals who exhibit sub-clinical levels of autistic traits, as has been found for other aspects of autistic cognition. The eye-movements of 71 university students were monitored as they completed a reading comprehension task. Consistent with previous studies, participants made shorter fixations on words that were highly predicted on the basis of preceding sentence context. However, contrary to the weak central coherence account, this effect was not reduced amongst individuals with high levels of autistic traits, as measured by the Autism Spectrum Quotient (AQ). Further exploratory analyses revealed that participants with high AQ scores fixated longer on words that resolved the meaning of an earlier homograph. However, this was only the case for sentences where the two potential meanings of the homograph result in different pronunciations. The results provide tentative evidence for differences in reading style that are associated with autistic traits, but fail to support the notion of weak central coherence extending into the non-autistic population. |
Marta Castellano; Michael Plöchl; Raul Vicente; Gordon Pipa Neuronal oscillations form parietal/frontal networks during contour integration Journal Article In: Frontiers in Integrative Neuroscience, vol. 8, pp. 64, 2014. @article{Castellano2014, The ability to integrate visual features into a globally coherent percept that can be further categorized and manipulated is a fundamental ability of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how local cortical and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we investigate perceptual grouping through the analysis of a contour categorization task, in which the local elements that form the contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. The contour formation in our visual stimulus is a dynamic process where, for the first time, visual perception of contours is disentangled from the onset of visual stimulation or from motor preparation, cognitive processes that until now have been behaviorally attached to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seem to be ongoing phenomena, categorization of a contour could only be predicted through local oscillatory activity within parietal/frontal sources, which, in turn, synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning across neural sources that are not category specific. Both long-range networks, i.e., the gamma network that is category specific, and the beta network that is not category specific, are functionally distinct but spatially overlapping. Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites. |
Mark S. Bolding; Adrienne C. Lahti; David White; Claire Moore; Demet Gurler; Timothy J. Gawne; Paul D. Gamlin Vergence eye movements in patients with schizophrenia Journal Article In: Vision Research, vol. 102, pp. 64–70, 2014. @article{Bolding2014, Previous studies have shown that smooth pursuit eye movements are impaired in patients with schizophrenia. However, under normal viewing conditions, targets move not only in the frontoparallel plane but also in depth, and tracking them requires both smooth pursuit and vergence eye movements. Although previous studies in humans and non-human primates suggest that these two eye movement subsystems are relatively independent of one another, to our knowledge, there have been no prior studies of vergence tracking behavior in patients with schizophrenia. Therefore, we have investigated these eye movements in patients with schizophrenia and in healthy controls. We found that patients with schizophrenia exhibited substantially lower gains compared to healthy controls during vergence tracking at all tested speeds (e.g. 0.25 Hz vergence tracking mean gain of 0.59 vs. 0.86). Further, consistent with previous reports, patients with schizophrenia exhibited significantly lower gains than healthy controls during smooth pursuit at higher target speeds (e.g. 0.5 Hz smooth pursuit mean gain of 0.64 vs. 0.73). In addition, there was a modest (r≈0.5), but significant, correlation between smooth pursuit and vergence tracking performance in patients with schizophrenia. Our observations clearly demonstrate substantial vergence tracking deficits in patients with schizophrenia. In these patients, deficits for smooth pursuit and vergence tracking are partially correlated suggesting overlap in the central control of smooth pursuit and vergence eye movements. © 2014 Elsevier Ltd. |
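For readers unfamiliar with tracking gain, the gains quoted above are ratios of the eye's response to the target's motion at the stimulus frequency. The sketch below shows one plausible way to compute such a gain from sinusoidal tracking data; the sampling rate, amplitudes, and Fourier-based estimator are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch: tracking gain as the ratio of the eye's response amplitude to the
# target's amplitude at the stimulus frequency. Simulated data, illustrative only.
import numpy as np

fs = 500.0                     # assumed sample rate (Hz)
f_stim = 0.25                  # target oscillation frequency (Hz)
t = np.arange(0, 20, 1 / fs)
target = 2.0 * np.sin(2 * np.pi * f_stim * t)             # deg of vergence demand
eye = 0.59 * 2.0 * np.sin(2 * np.pi * f_stim * t - 0.2)   # simulated patient response

def amplitude_at(signal, freq, fs):
    """Amplitude of the Fourier component closest to `freq`."""
    spec = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    return 2 * np.abs(spec[k]) / signal.size

gain = amplitude_at(eye, f_stim, fs) / amplitude_at(target, f_stim, fs)
print(f"tracking gain ≈ {gain:.2f}")   # ~0.59 for this simulated patient
```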
Sabrina Boll; Matthias Gamer 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions Journal Article In: Frontiers in Behavioral Neuroscience, vol. 8, pp. 255, 2014. @article{Boll2014, Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes toward diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n = 39) and a high (n = 40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes toward the eyes and toward the mouth could be identified. We found that the low vs. the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation toward diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features. |
Paul J. Boon; Jan Theeuwes; Artem V. Belopolsky Updating visual-spatial working memory during object movement Journal Article In: Vision Research, vol. 94, pp. 51–57, 2014. @article{Boon2014, Working memory enables temporary maintenance and manipulation of information for immediate access by cognitive processes. The present study investigates how spatial information stored in working memory is updated during object movement. Participants had to remember a particular location on an object which, after a retention interval, started to move. The question was whether the memorized location was updated with the movement of the object or whether after object movement it remained represented in retinotopic coordinates. We used saccade trajectories to examine how memorized locations were represented. The results showed that immediately after the object stopped moving, there was both a retinotopic and an object-centered representation. However, 200 ms later, the activity at the retinotopic location decayed, making the memory representation fully object-centered. Our results suggest that memorized locations are updated from retinotopic to object-centered coordinates during, or shortly after object movement. |
Ali Borji; Laurent Itti Defending Yarbus: Eye movements reveal observers' task Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–22, 2014. @article{Borji2014, In a very influential yet anecdotal illustration, Yarbus suggested that human eye-movement patterns are modulated top down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013; Iqbal & Bailey, 2004), Greene, Liu, and Wolfe (2012) argued against it by reporting a failure. In this study, we perform a more systematic investigation of this problem, probing a larger number of experimental factors than previously. Our main goal is to determine the informativeness of eye movements for task and mental state decoding. We perform two experiments. In the first experiment, we reanalyze the data from a previous study by Greene et al. (2012) and contrary to their conclusion, we report that it is possible to decode the observer's task from aggregate eye-movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.0722e-04). In the second experiment, we repeat and extend Yarbus's original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus's scene) under Yarbus's seven questions. We show that task decoding is possible, also moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.4535e-06). We thus conclude that Yarbus's idea is supported by our data and continues to be an inspiration for future computational and experimental eye-movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements. |
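The decoding accuracies above are evaluated against chance with a binomial test. A minimal sketch of that comparison follows; the trial count is a hypothetical placeholder, and only the accuracy and chance level are taken from the abstract.

```python
# Sketch of testing decoding accuracy against chance with a binomial test.
# The number of classified trials is an assumption for illustration.
from scipy.stats import binomtest

n_trials = 1000                        # hypothetical number of classified trials
k_correct = round(0.3412 * n_trials)   # 34.12% correct reported in Experiment 1
result = binomtest(k_correct, n_trials, p=0.25, alternative="greater")
print(f"accuracy above 25% chance: p = {result.pvalue:.2e}")
```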
Ali Borji; Daniel Parks; Laurent Itti Complementary effects of gaze direction and early saliency in guiding fixations during free viewing Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–32, 2014. @article{Borji2014a, Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to face region. Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered in constructing more predictive visual attention models in the future. |
Tobias Bormann; Sascha A. Wolfer; Wibke Hachmann; Wolf A. Lagrèze; Lars Konieczny An eye movement study on the role of the visual field defect in pure alexia Journal Article In: PLoS ONE, vol. 9, no. 7, pp. e100898, 2014. @article{Bormann2014, Pure alexia is a severe impairment of word reading which is usually accompanied by a right-sided visual field defect. Patients with pure alexia exhibit better preserved writing and a considerable word length effect, claimed to result from a serial letter processing strategy. Two experiments compared the eye movements of four patients with pure alexia to controls with simulated visual field defects (sVFD) when reading single words. Besides differences in response times and differential effects of word length on word reading in both groups, fixation durations and the occurrence of a serial, letter-by-letter fixation strategy were investigated. The analyses revealed quantitative and qualitative differences between pure alexic patients and unimpaired individuals reading with sVFD. The patients with pure alexia read words slower and exhibited more fixations. The serial, letter-by-letter fixation strategy was observed only in the patients but not in the controls with sVFD. It is argued that the VFD does not cause pure alexic reading. |
Sabine Born; Isaline Mottet; Dirk Kerzel Presaccadic perceptual facilitation effects depend on saccade execution: Evidence from the stop-signal paradigm Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–10, 2014. @article{Born2014, Prior to the onset of a saccadic eye movement, perception is facilitated at the saccade target location. This has been attributed to a shift of attention. To test whether presaccadic attention shifts are strictly dependent on saccade execution, we examined whether they are found when observers are required to cancel the eye movement. We combined a dual task with the stop-signal paradigm: Subjects made saccades as quickly as possible to a cued location while discriminating a stimulus either at the saccade target or at the opposite location. A stop signal was presented on a subset of trials, asking subjects to cancel the eye movement. The delay of the stop signal was adjusted to yield successful inhibition of the saccade in 50% of trials. Results show similar perceptual facilitation at the saccade target for saccades with or without a stop signal, suggesting that presaccadic attention shifts are obligatory for all saccades. However, there was facilitation only when saccades were actually performed, not when observers successfully inhibited them. Thus, preparing an eye movement without subsequently executing it does not result in an attention shift. The results speak to a difference between saccade preparation and saccade programming. In light of the strong dependence on saccade execution, we discuss the functional role and causes of presaccadic attention shifts. |
Arielle Borovsky; Sarah C. Creel Children and adults integrate talker and verb information in online processing Journal Article In: Developmental Psychology, vol. 50, no. 5, pp. 1600–1613, 2014. @article{Borovsky2014, Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (ages 3-10 years) and college-aged adults. Participants were initially familiarized with 2 talkers who identified their respective roles (e.g., PRINCESS and PIRATE) before hearing a previously introduced talker name an action and object ("I want to hold the sword," in the pirate's voice). As the sentence was spoken, eye movements were recorded to 4 objects that varied in relationship to the sentential talker and action (target: SWORD, talker-related: SHIP, action-related: WAND, and unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker's role and actively use this information to interpret speech. |
Arielle Borovsky; Kim Sweeney; Jeffrey L. Elman; Anne Fernald Real-time interpretation of novel events across childhood Journal Article In: Journal of Memory and Language, vol. 73, no. 1, pp. 1–14, 2014. @article{Borovsky2014a, Despite extensive evidence that adults and children rapidly integrate world knowledge to generate expectancies for upcoming language, little work has explored how this knowledge is initially acquired and used. We explore this question in 3- to 10-year-old children and adults by measuring the degree to which sentences depicting recently learned connections between agents, actions and objects lead to anticipatory eye-movements to the objects. Combinatory information in sentences about agent and action elicited anticipatory eye-movements to the Target object in adults and older children. Our findings suggest that adults and school-aged children can quickly activate information about recently exposed novel event relationships in real-time language processing. However, there were important developmental differences in the use of this knowledge. Adults and school-aged children used the sentential agent and action to predict the sentence final theme, while preschool children's fixations reflected a simple association to the currently spoken item. We consider several reasons for this developmental difference and possible extensions of this paradigm. |
Hans Rutger Bosker; Hugo Quené; Ted J. M. Sanders; Nivja H. Jong Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not Journal Article In: Journal of Memory and Language, vol. 75, pp. 104–116, 2014. @article{Bosker2014, Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to study the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension. |
Marie Line Bosse; Sonia Kandel; Chloé Prado; Sylviane Valdois Does visual attention span relate to eye movements during reading and copying? Journal Article In: International Journal of Behavioral Development, vol. 38, no. 1, pp. 81–85, 2014. @article{Bosse2014, This research investigated whether text reading and copying involve visual attention-processing skills. Children in grades 3 and 5 read and copied the same text. We measured eye movements while reading and the number of gaze lifts (GL) during copying. The children were also administered letter report tasks that constitute an estimation of the number of letters that are processed simultaneously. The tasks were designed to assess visual attention span abilities (VA). The results for both grades revealed that the children who reported more letters, i.e., processed more consonants in parallel, produced fewer rightward fixations during text reading suggesting they could process more letters at each fixation. They also copied more letters per gaze lift from the same text. Furthermore, a regression analysis showed that VA span predicted variations in copying independently of the influence of reading skills. The findings support a role of VA span abilities in the early extraction of orthographic information, for both reading and copying tasks. |
James F. Cavanagh; Thomas V. Wiecki; Angad Kochar; Michael J. Frank Eye tracking and pupillometry are indicators of dissociable latent decision processes Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 4, pp. 1476–1488, 2014. @article{Cavanagh2014, Can you predict what people are going to do just by watching them? This is certainly difficult: it would require a clear mapping between observable indicators and unobservable cognitive states. In this report, we demonstrate how this is possible by monitoring eye gaze and pupil dilation, which predict dissociable biases during decision making. We quantified decision making using the drift diffusion model (DDM), which provides an algorithmic account of how evidence accumulation and response caution contribute to decisions through separate latent parameters of drift rate and decision threshold, respectively. We used a hierarchical Bayesian estimation approach to assess the single trial influence of observable physiological signals on these latent DDM parameters. Increased eye gaze dwell time specifically predicted an increased drift rate toward the fixated option, irrespective of the value of the option. In contrast, greater pupil dilation specifically predicted an increase in decision threshold during difficult decisions. These findings suggest that eye tracking and pupillometry reflect the operations of dissociated latent decision processes. |
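As a rough illustration of the drift diffusion model referenced above, the sketch below simulates how the two latent parameters, drift rate and decision threshold, separately shape choices and reaction times. It is a forward simulation with made-up parameter values, not the authors' hierarchical Bayesian fitting procedure.

```python
# Conceptual sketch of the drift diffusion model (DDM): evidence accumulates at a
# drift rate v toward a threshold a. Parameter values are illustrative only.
import numpy as np

def simulate_ddm(v, a, n_trials=1000, dt=0.001, noise=1.0, t0=0.3, seed=0):
    """Return (choices, reaction times) for an unbiased DDM starting at a/2."""
    rng = np.random.default_rng(seed)
    choices, rts = np.empty(n_trials, dtype=int), np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i], rts[i] = int(x >= a), t0 + t
    return choices, rts

# Higher drift rate (e.g., longer gaze dwell on an option) -> faster, more accurate;
# higher threshold (e.g., pupil-linked caution) -> slower but more accurate.
for v, a in [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]:
    c, rt = simulate_ddm(v, a)
    print(f"v={v}, a={a}: accuracy={c.mean():.2f}, mean RT={rt.mean():.2f}s")
```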
Dario Cazzoli; Chrystalina A. Antoniades; Christopher Kennard; Thomas Nyffeler; Claudio L. Bassetti; René M. Müri Eye movements discriminate fatigue due to chronotypical factors and time spent on task - A double dissociation Journal Article In: PLoS ONE, vol. 9, no. 1, pp. e87146, 2014. @article{Cazzoli2014, Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for both chronotypes. The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration reflects a lack of synchronicity between chronotype and time of the day. |
Myriam Chanceaux; Jonathan Grainger Effects of number, complexity, and familiarity of flankers on crowded letter identification Journal Article In: Journal of Vision, vol. 14, pp. 1–17, 2014. @article{Chanceaux2014, We tested identification of target letters surrounded by a varying number (2, 4, 6) of horizontally aligned flanking elements. Strings were presented left or right of a central fixation dot, and targets were always at the center of the string. Flankers could be other letters, digits, symbols, simple shapes, or false fonts, and thus varied both in terms of visual complexity and familiarity. Two-alternative forced choice (2AFC) speed and accuracy was measured for choosing the target letter versus an alternative letter that was not present in the string. Letter identification became harder as the number of flankers increased. Greater flanker complexity led to more interference in target identification, whereas more complex targets were easier to identify. Effects of flanker complexity were found to depend on visual field and position of flankers, with the strongest effects seen for leftward flankers in the left visual field. Visual complexity predicted flanker interference better than familiarity, and better than target-flanker similarity. These results provide further support for an excessive feature-integration account of the interfering effects of both adjacent and nonadjacent flanking elements in horizontally aligned strings. |
Myriam Chanceaux; Anne Guérin-Dugué; Benoît Lemaire; Thierry Baccino A computational cognitive model of information search in textual materials Journal Article In: Cognitive Computation, vol. 6, no. 1, pp. 1–17, 2014. @article{Chanceaux2014a, Document foraging for information is a crucial and increasingly prevalent activity nowadays. We designed a computational cognitive model to simulate the oculomotor scanpath of an average web user searching for specific information from textual materials. In particular, the developed model dynamically combines visual, semantic, and memory processes to predict the user's focus of attention during information seeking from paragraphs of text. A series of psychological experiments was conducted using eye-tracking techniques in order to validate and refine the proposed model. Comparisons between model simulations and human data are reported and discussed taking into account the strengths and shortcomings of the model. The proposed model provides a unique contribution to the investigation of the cognitive processes involved during information search and bears significant implications for web page design and evaluation. |
Samuel G. Charlton; Nicola J. Starkey; John A. Perrone; Robert B. Isler What's the risk? A comparison of actual and perceived driving risk Journal Article In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 25, no. A, pp. 50–64, 2014. @article{Charlton2014, It has long been presumed that drivers' perceptions of risk play an important role in guiding on-road behaviour. How accurately drivers perceive the momentary risk of a driving situation, however, is unknown. This research compared drivers' perceptions of the momentary risk for a range of roads to the objective risk associated with those roads. Videos of rural roads, filmed from the drivers' perspective, were presented to 69 participants seated in a driving simulator while they indicated the momentary levels of risk they were experiencing by moving a risk meter mounted on the steering wheel. Estimates of the objective levels of risk for the roads were calculated using road protection scores from the KiwiRAP database (part of the International Road Assessment Programme). Subsequently, the participants also provided risk estimates for still photos taken from the videos. Another group of 10 participants viewed the videos and photos while their eye movements and fixations were recorded. In a third experiment, 14 participants drove a subset of the roads in a car while providing risk ratings at selected points of interest. Results showed a high degree of consistency across the different methods. Certain road situations were rated as being riskier than the objective risk, and perhaps more importantly, the risk of other situations was significantly under-rated. Horizontal curves and narrow lanes were associated with over-rated risk estimates, while intersections and roadside hazards such as narrow road shoulders, power poles and ditches were significantly under-rated. Analysis of eye movements indicated that drivers did not fixate these features and that the spread of fixations, pupil size and eye blinks were significantly correlated with the risk ratings. An analysis of the road design elements at 77 locations in the video revealed five road characteristics that predicted nearly 80% of the variance in drivers' risk perceptions: horizontal curvature, lane and shoulder width, gradient, and the presence of median barriers. |
Samuel W. Cheadle; Semir Zeki The role of parietal cortex in the formation of color and motion based concepts Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 535, 2014. @article{Cheadle2014, Imaging evidence shows that separate subdivisions of parietal cortex, in and around the intraparietal sulcus (IPS), are engaged when stimuli are grouped according to color and to motion (Zeki and Stutters, 2013). Since grouping is an essential step in the formation of concepts, we wanted to learn whether parietal cortex is also engaged in the formation of concepts according to these two attributes. Using functional magnetic resonance imaging (fMRI), and choosing the recognition of concept-based color or motion stimuli as our paradigm, we found that there was strong concept-related activity in and around the IPS, a region whose homolog in the macaque monkey is known to receive direct but segregated anatomical inputs from V4 and V5. Parietal activity related to color concepts was juxtaposed but did not overlap with activity related to motion concepts, thus emphasizing the continuation of the segregation of color and motion into the conceptual system. Concurrent retinotopic mapping experiments showed that within the parietal cortex, concept-related activity increases within later stage IPS areas. |
Samuel Cheadle; Valentin Wyart; Konstantinos Tsetsos; Nicholas E. Myers; Vincent DeGardelle; Santiago Herce Castañón; Christopher Summerfield Adaptive gain control during human perceptual choice Journal Article In: Neuron, vol. 81, no. 6, pp. 1429–1441, 2014. @article{Cheadle2014a, Neural systems adapt to background levels of stimulation. Adaptive gain control has been extensively studied in sensory systems but overlooked in decision-theoretic models. Here, we describe evidence for adaptive gain control during the serial integration of decision-relevant information. Human observers judged the average information provided by a rapid stream of visual events (samples). The impact that each sample wielded over choices depended on its consistency with the previous sample, with more consistent or expected samples wielding the greatest influence over choice. This bias was also visible in the encoding of decision information in pupillometric signals and in cortical responses measured with functional neuroimaging. These data can be accounted for with a serial sampling model in which the gain of information processing adapts rapidly to reflect the average of the available evidence. |
Jiaqing Chen; Matthias Niemeier Do head-on-trunk signals modulate disengagement of spatial attention? Journal Article In: Experimental Brain Research, vol. 232, no. 1, pp. 147–157, 2014. @article{Chen2014, Body schema is indispensable for sensorimotor control and learning, but whether it is associated with cognitive functions, such as allocation of spatial attention, remains unclear. Observations in patients with unilateral spatial neglect support this view, yet data from neurologically normal participants are inconsistent. Here, we investigated the influence of head-on-trunk positions (30° left or right, straight ahead) on disengagement of attention in healthy participants. Five experiments examined the effects of valid or invalid cues on spatial shifts of attention using the Posner paradigm. Experiment 1 used a forced-choice task. Participants quickly reported the location of a target that appeared left or right of the fixation point, preceded by a cue on the same (valid) or opposite side (invalid). Experiments 2, 3, and 4 also used valid and invalid cues but required participants to simply detect a target appearing on the left or right side. Experiment 5 used a speeded discrimination task, in which participants quickly reported the orientation of a Gabor. We observed expected influences of validity and stimulus onset asynchrony as well as inhibition of return; however, none of the experiments suggested that head-on-trunk position created or changed visual field advantages, contrary to earlier reports. Our results showed that the manipulations of the body schema did not modulate attentional processes in the healthy brain, unlike neuropsychological studies on neglect patients. Our findings suggest that spatial neglect reflects a state of the lesioned brain that is importantly different from that of the normally functioning brain. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Biased saccadic responses to emotional stimuli in anxiety: An antisaccade study Journal Article In: PLoS ONE, vol. 9, no. 2, pp. e86474, 2014. @article{Chen2014b, Research suggests that anxiety is maintained by an attentional bias to threat, and a growing base of evidence suggests that anxiety may additionally be associated with the deficient attentional processing of positive stimuli. The present study sought to examine whether such anxiety-linked attentional biases were associated with either stimulus-driven or attentional control mechanisms of attentional selectivity. High and low trait anxious participants completed an emotional variant of an antisaccade task, in which they were required to prosaccade towards, or antisaccade away from a positive, neutral or threat stimulus, while eye movements were recorded. While low anxious participants were found to be slower to saccade in response to positive stimuli, irrespective of whether a pro- or antisaccade was required, such a bias was absent in high anxious individuals. Analysis of erroneous antisaccades further revealed, at trend level, that anxiety was associated with reduced peak velocity in response to threat. The findings suggest that anxiety is associated with the aberrant processing of positive stimuli, and greater compensatory efforts in the inhibition of threat. The findings further highlight the relevance of considering saccade peak velocity in the assessment of anxiety-linked attentional processing. |
Sheng-Chang Chen; Hsiao-Ching She; Ming-Hua Chuang; Jiun-Yu Wu; Jie-Li Tsai; Tzyy-Ping Jung Eye movements predict students' computer-based assessment performance of physics concepts in different presentation modalities Journal Article In: Computers & Education, vol. 74, pp. 61–72, 2014. @article{Chen2014a, Despite decades of studies on the link between eye movements and human cognitive processes, the exact nature of the link between eye movements and computer-based assessment performance remains unknown. To bridge this gap, the present study investigates whether human eye movement dynamics can predict computer-based assessment performance (accuracy of response) in different presentation modalities (picture vs. text). An eye-tracking system was employed to collect 63 college students' eye movement behaviors while they engaged with computer-based physics concept questions presented as either pictures or text. Students' responses were collected immediately after the picture or text presentations in order to determine the accuracy of responses. The results demonstrated that students' eye movement behavior can successfully predict their computer-based assessment performance. Remarkably, the mean fixation duration had the greatest power to predict the likelihood of answering the physics concept questions correctly, followed by the proportion of re-reading time. Additionally, the mean saccade distance had the weakest (and negative) power to predict the likelihood of answering correctly in the picture presentation. Interestingly, pictorial presentations appear to convey physics concepts more quickly and efficiently than do textual presentations. This study adds empirical evidence for a predictive relationship between eye movement behaviors and successful cognitive performance. Moreover, it provides insight into the modality effects on students' computer-based assessment performance through the use of eye movement behavior evidence. |
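One way to picture the kind of prediction described above is a simple classifier that maps eye-movement features onto response accuracy. The sketch below uses simulated data and a scikit-learn logistic regression purely for illustration; the feature names echo the abstract, but the authors' actual modeling procedure may differ.

```python
# Illustrative sketch: predicting correct/incorrect responses from eye-movement
# features with logistic regression. All data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 63 * 20  # hypothetical number of question trials across 63 students
X = np.column_stack([
    rng.normal(250, 40, n),   # mean fixation duration (ms)
    rng.uniform(0, 0.5, n),   # proportion of re-reading time
    rng.normal(3.5, 1.0, n),  # mean saccade distance (deg)
])
# Simulated ground truth: longer fixations and more re-reading -> more correct answers
logit = 0.01 * (X[:, 0] - 250) + 2.0 * X[:, 1] - 0.3 * (X[:, 2] - 3.5)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```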
Xiaorong Cheng; Qi Yang; Yaqian Han; Xianfeng Ding; Zhao Fan Capacity limit of simultaneous temporal processing: How many concurrent 'clocks' in vision? Journal Article In: PLoS ONE, vol. 9, no. 3, pp. e91797, 2014. @article{Cheng2014, A fundamental ability for humans is to monitor and process multiple temporal events that occur at different spatial locations simultaneously. A great number of studies have demonstrated simultaneous temporal processing (STP) in human and animal participants, i.e., multiple 'clocks' rather than a single 'clock'. However, to date, we still have no knowledge about the exact limit of STP in vision. Here we provide the first experimental measurement of this critical parameter in human vision by using two novel and complementary paradigms. The first paradigm combines merits of a temporal oddball-detection task and a capacity measurement widely used in the studies of visual working memory to quantify the capacity of STP (CSTP). The second paradigm uses a two-interval temporal comparison task with various encoded spatial locations involved in the standard temporal intervals to rule out an alternative, 'object individuation'-based, account of CSTP, which is measured by the first paradigm. The results of both paradigms consistently indicate that the capacity limit of simultaneous temporal processing in vision is around 3 to 4 spatial locations. Moreover, the binding of the 'local clock' and its specific location is undermined by bottom-up competition of spatial attention, indicating that the time-space binding is resource-consuming. Our finding that the capacity of STP is not constrained by the capacity of visual working memory (VWM) supports the idea that the representations of STP are likely stored and operated in units different from those of VWM. The second paradigm further confirms that the limited number of location-bound 'local clocks' are activated and maintained within a time window of several hundred milliseconds. |
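The capacity estimator alluded to above is, in most visual working memory studies, Cowan's K. Assuming the authors used this standard form (the abstract does not spell it out), it can be written as:

```latex
% Cowan's K: capacity estimate from a detection task with N items,
% hit rate H and false-alarm rate FA (standard form; the paper may differ).
K = N \times (H - \mathrm{FA})
```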
Kimberly S. Chiew; Todd S. Braver Dissociable influences of reward motivation and positive emotion on cognitive control Journal Article In: Cognitive, Affective, & Behavioral Neuroscience, vol. 14, no. 2, pp. 509–529, 2014. @article{Chiew2014, It is becoming increasingly appreciated that affective and/or motivational influences contribute strongly to goal-oriented cognition and behavior. An unresolved question is whether emotional manipulations (i.e., direct induction of affectively valenced subjective experience) and motivational manipulations (e.g., delivery of performance-contingent rewards and punishments) have similar or distinct effects on cognitive control. Prior work has suggested that reward motivation can reliably enhance a proactive mode of cognitive control, whereas other evidence is suggestive that positive emotion improves cognitive flexibility, but reduces proactive control. However, a limitation of the prior research is that reward motivation and positive emotion have largely been studied independently. Here, we directly compared the effects of positive emotion and reward motivation on cognitive control with a tightly matched, within-subjects design, using the AX-continuous performance task paradigm, which allows for relative measurement of proactive versus reactive cognitive control. High-resolution pupillometry was employed as a secondary measure of cognitive dynamics during task performance. Robust increases in behavioral and pupillometric indices of proactive control were observed with reward motivation. The effects of positive emotion were much weaker, but if anything, also reflected enhancement of proactive control, a pattern that diverges from some prior findings. These results indicate that reward motivation has robust influences on cognitive control, while also highlighting the complexity and heterogeneity of positive-emotion effects. The findings are discussed in terms of potential neurobiological mechanisms. © 2014 The Author(s). |
Joseph D. Chisholm; Alan Kingstone Knowing and avoiding: The influence of distractor awareness on oculomotor capture Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 5, pp. 1258–1264, 2014. @article{Chisholm2014, Kramer, Hahn, Irwin, and Theeuwes (2000) reported that the interfering effect of distractors is reduced when participants are aware of the to-be-ignored information. In contrast, recent evidence indicates that distractor interference increases when individuals are aware of the distractors. In the present investigation, we directly assessed the influence of distractor awareness on oculomotor capture, with the hope of resolving this contradiction in the literature and gaining further insight into the influence of awareness on attention. Participants completed a traditional oculomotor capture task. They were not informed of the presence of the distracting information (unaware condition), were informed of distractors (aware condition), or were informed of distractor information and told to avoid attending to it (avoid condition). Being aware of the distractors yielded a performance benefit, relative to the unaware condition; however, this benefit was eliminated when participants were told to actively avoid distraction. This pattern of results reconciles past contradictions in the literature and suggests an inverted-U function of awareness in distractor performance. Too little or too much emphasis yields a performance decrement, but an intermediate level of emphasis provides a performance benefit. |
Kyoung Whan Choe; Randolph Blake; Sang-Hun Lee Dissociation between neural signatures of stimulus and choice in population activity of human V1 during perceptual decision-making Journal Article In: Journal of Neuroscience, vol. 34, no. 7, pp. 2725–2743, 2014. @article{Choe2014, Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. |
Jennie E. S. Choi; Pavan A. Vaswani; Reza Shadmehr Vigor of movements and the cost of time in decision making Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1212–1223, 2014. @article{Choi2014, If we assume that the purpose of a movement is to acquire a rewarding state, the duration of the movement carries a cost because it delays acquisition of reward. For some people, passage of time carries a greater cost, as evidenced by how long they are willing to wait for a rewarding outcome. These steep discounters are considered impulsive. Is there a relationship between cost of time in decision making and cost of time in control of movements? Our theory predicts that people who are more impulsive should in general move faster than subjects who are less impulsive. To test our idea, we considered elementary voluntary movements: saccades of the eye. We found that in humans, saccadic vigor, assessed using velocity as a function of amplitude, was as much as 50% greater in one subject than another; that is, some people consistently moved their eyes with high vigor. We measured the cost of time in a decision-making task in which the same subjects were given a choice between smaller odds of success immediately and better odds if they waited. We measured how long they were willing to wait to obtain the better odds and how much they increased their wait period after they failed. We found that people that exhibited greater vigor in their movements tended to have a steep temporal discount function, as evidenced by their waiting patterns in the decision-making task. The cost of time may be shared between decision making and motor control. |
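Saccadic vigor as described above is usually quantified by fitting each person's peak-velocity-versus-amplitude relation (the "main sequence") and comparing it with a population template. The sketch below assumes the common saturating-exponential form; all numbers and the ratio-based vigor index are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: fit a main-sequence curve (peak velocity vs. amplitude) and express vigor
# as the ratio of an individual's curve to a population template. Simulated data.
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Saturating-exponential main sequence: peak velocity as a function of amplitude."""
    return v_max * (1 - np.exp(-amplitude / c))

rng = np.random.default_rng(3)
amplitudes = rng.uniform(2, 20, 200)                                           # deg
peak_velocities = main_sequence(amplitudes, 550, 6) + rng.normal(0, 20, 200)   # deg/s

(v_max_hat, c_hat), _ = curve_fit(main_sequence, amplitudes, peak_velocities, p0=[500, 5])
population_template = main_sequence(amplitudes, 500, 6)   # assumed group-average curve
vigor = np.mean(main_sequence(amplitudes, v_max_hat, c_hat) / population_template)
print(f"fitted v_max = {v_max_hat:.0f} deg/s, vigor index ≈ {vigor:.2f}")
```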
Mina Choi; Joel Wang; Wei Chung Cheng; Giovanni Ramponi; Luigi Albani; Aldo Badano Effect of veiling glare on detectability in high-dynamic-range medical images Journal Article In: IEEE/OSA Journal of Display Technology, vol. 10, no. 5, pp. 420–428, 2014. @article{Choi2014a, We describe a methodology for predicting the detectability of subtle targets in dark regions of high-dynamic-range (HDR) images in the presence of veiling glare in the human eye. The method relies on predictions of contrast detection thresholds for the human visual system within a HDR image based on psychophysics measurements and modeling of the HDR display device characteristics. We present experimental results used to construct the model and discuss an image-dependent empirical veiling glare model and the validation of the model predictions with test patterns, natural scenes, and medical images. The model predictions are compared to a previously reported model (HDR-VDP2) for predicting HDR image quality accounting for glare effects. © 2005-2012 IEEE. |
Wonil Choi; Rutvik H. Desai; John M. Henderson The neural substrates of natural reading: A comparison of normal and nonword text using eyetracking and fMRI Journal Article In: Frontiers in Human Neuroscience, vol. 8, pp. 1024, 2014. @article{Choi2014c, Most previous studies investigating the neural correlates of reading have presented text using serial visual presentation (SVP), which may not fully reflect the underlying processes of natural reading. In the present study, eye movements and BOLD data were collected while subjects either read normal paragraphs naturally or moved their eyes through "paragraphs" of pseudo-text (pronounceable pseudowords or consonant letter strings) in two pseudo-reading conditions. Eye movement data established that subjects were reading and scanning the stimuli normally. A conjunction fMRI analysis across natural- and pseudo-reading showed that a common eye-movement network including frontal eye fields (FEF), supplementary eye fields (SEF), and intraparietal sulci was activated, consistent with previous studies using simpler eye movement tasks. In addition, natural reading versus pseudo-reading showed different patterns of brain activation: normal reading produced activation in a well-established language network that included superior temporal gyrus/sulcus, middle temporal gyrus (MTG), angular gyrus (AG), inferior frontal gyrus, and middle frontal gyrus, whereas pseudo-reading produced activation in an attentional network that included anterior/posterior cingulate and parietal cortex. These results are consistent with results found in previous single-saccade eye movement tasks and SVP reading studies, suggesting that component processes of eye-movement control and language processing observed in past fMRI research generalize to natural reading. The results also suggest that combining eyetracking and fMRI is a suitable method for investigating the component processes of natural reading in fMRI research. |
Wonil Choi; Peter C. Gordon Word skipping during sentence reading: effects of lexicality on parafoveal processing Journal Article In: Attention, Perception, & Psychophysics, vol. 76, no. 1, pp. 201–213, 2014. @article{Choi2014b, Two experiments examined how lexical status affects the targeting of saccades during reading by using the boundary technique to vary independently the content of a letter string when seen in parafoveal preview and when directly fixated. Experiment 1 measured the skipping rate for a target word embedded in a sentence under three parafoveal preview conditions: full preview (e.g., brain-brain), pseudohomophone preview (e.g., brane-brain), and orthographic nonword control preview (e.g., brant-brain); in the first condition, the preview string was always an English word, while in the second and third conditions, it was always a nonword. Experiment 2 investigated three conditions where the preview string was always a word: full preview (e.g., beach-beach), homophone preview (e.g., beech-beach), and orthographic control preview (e.g., bench-beach). None of the letter string manipulations used to create the preview conditions in the experiments disrupted sublexical orthographic or phonological patterns. In Experiment 1, higher skipping rates were observed for the full (lexical) preview condition, which consisted of a word, than for the nonword preview conditions (pseudohomophone and orthographic control). In contrast, Experiment 2 showed no difference in skipping rates across the three types of lexical preview conditions (full, homophone, and orthographic control), although preview type did influence reading times. This pattern indicates that skipping not only depends on the presence of disrupted sublexical patterns of orthography or phonology, but also is critically dependent on processes that are sensitive to the lexical status of letter strings in the parafovea. |
Wing Yee Chow; Shevaun Lewis; Colin Phillips Immediate sensitivity to structural constraints in pronoun resolution Journal Article In: Frontiers in Psychology, vol. 5, pp. 630, 2014. @article{Chow2014, Real-time interpretation of pronouns is sometimes sensitive to the presence of grammatically-illicit antecedents and sometimes not. This occasional sensitivity has been taken as evidence that structural constraints do not immediately impact the initial antecedent retrieval for pronoun interpretation. We argue that it is important to separate effects that reflect the initial antecedent retrieval process from those that reflect later processes. We present results from five reading comprehension experiments. Both the current results and previous evidence support the hypothesis that agreement features and structural constraints immediately constrain the antecedent retrieval process for pronoun interpretation. Occasional sensitivity to grammatically-illicit antecedents may be due to repair processes triggered when the initial retrieval fails to return a grammatical antecedent. |
T. Chuk; A. B. Chan; J. H. Hsiao Understanding eye movements in face recognition using hidden Markov models Journal Article In: Journal of Vision, vol. 14, no. 11, pp. 1–14, 2014. @article{Chuk2014, We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and model each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than locations of the fixations alone. |
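In the same spirit as the approach described above, the sketch below fits a Gaussian hidden Markov model to one simulated scanpath, recovering candidate regions of interest and transition probabilities. The hmmlearn package, the two-state choice, and the simulated fixations are assumptions for illustration; the authors used their own HMM formulation.

```python
# Sketch of modeling one participant's fixation locations with a Gaussian HMM.
# Simulated fixations and the two-state choice are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)
# Simulated (x, y) fixation positions alternating between an "eyes" and a "mouth" region
eyes = rng.normal([0.50, 0.35], 0.03, size=(60, 2))
mouth = rng.normal([0.50, 0.75], 0.03, size=(40, 2))
fixations = np.vstack([eyes, mouth])

model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
model.fit(fixations)

print("state means (candidate regions of interest):\n", model.means_)
print("transition matrix:\n", model.transmat_.round(2))
print("log-likelihood of this scanpath:", model.score(fixations))
```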
Francesca Ciardo; Barbara F. M. Marino; Rossana Actis-Grosso; Angela Rossetti; Paola Ricciardelli Face age modulates gaze following in young adults Journal Article In: Scientific Reports, vol. 4, pp. 4746, 2014. @article{Ciardo2014, Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity). |
Moreno I. Coco; Frank Keller Classification of visual and linguistic tasks using eye-movement features Journal Article In: Journal of Vision, vol. 14, no. 3, pp. 1–18, 2014. @article{Coco2014, The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description). |
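A compact way to picture the classification analysis described above is to train a classifier on per-trial eye-movement features and read out feature importances. The sketch below uses simulated data and a random forest purely for illustration; the feature names echo the abstract, but the pipeline is not the authors'.

```python
# Sketch of task classification from eye-movement features with a feature-importance
# readout. All data are simulated placeholders; the classifier choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials = 600
y = rng.integers(0, 3, n_trials)   # 0 = visual search, 1 = object naming, 2 = scene description
features = ["initiation_time", "mean_fixation_duration", "n_fixations_on_objects",
            "mean_saccade_amplitude", "attention_entropy", "total_fixation_on_objects",
            "gaze_dispersion"]
X = rng.normal(0, 1, (n_trials, len(features)))
X[:, 0] += 0.8 * y                 # make initiation time diagnostic of the task, as in the abstract

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
for name, imp in sorted(zip(features, clf.fit(X, y).feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")
```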
Moreno I. Coco; George L. Malcolm; Frank Keller The interplay of bottom-up and top-down mechanisms in visual guidance during object naming Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 6, pp. 1096–1120, 2014. @article{Coco2014a, An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from the scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention. |
Michael Coesmans; Christian H. Röder; Albertine E. Smit; Sebastiaan K. E. Koekkoek; Chris I. De Zeeuw; Maarten A. Frens; Josef N. Geest Cerebellar motor learning deficits in medicated and medication-free men with recent-onset schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 39, no. 1, pp. 3–11, 2014. @article{Coesmans2014, BACKGROUND: The notion that cerebellar deficits may underlie clinical symptoms in people with schizophrenia is tested by evaluating 2 forms of cerebellar learning in patients with recent-onset schizophrenia. A potential medication effect is evaluated by including patients with or without antipsychotics. METHODS: We assessed saccadic eye movement adaptation and eyeblink conditioning in men with recent-onset schizophrenia who were taking antipsychotic medication or who were antipsychotic-free and in age-matched controls. RESULTS: We included 39 men with schizophrenia (10 who were taking clozapine, 16 who were taking haloperidol and 13 who were antipsychotic-free) and 29 controls in our study. All participants showed significant saccadic adaptation. Adaptation strength did not differ between healthy controls and men with schizophrenia. The speed of saccade adaptation, however, was significantly lower in men with schizophrenia. They also showed a significantly lower increase in the number of conditioned eyeblink responses. Over all experiments, no consistent effects of medication were observed. These outcomes did not correlate with age, years of education, psychopathology or dose of antipsychotics. LIMITATIONS: As patients were not randomized for treatment, an influence of confounding variables associated with medication status cannot be excluded. Individual patients also varied along the schizophrenia spectrum despite the relative homogeneity with respect to onset of illness and short usage of medication. Finally, the relatively small number of participants may have concealed effects as a result of insufficient statistical power. CONCLUSION: We found several cerebellar learning deficits in men with schizophrenia that we cannot attribute to the use of antipsychotics. This finding, combined with the fact that deficits are already present in patients with recent-onset schizophrenia, suggests that cerebellar impairments may be a trait deficit in people with schizophrenia, although this should be confirmed in longitudinal studies. |
Andrew L. Cohen; Adrian Staub Online processing of novel noun-noun compounds: Eye movement evidence Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 147–165, 2014. @article{Cohen2014, Three eye-tracking experiments investigated online processing of novel noun-noun compounds. The experiments compared processing of compounds that are difficult to interpret in isolation (e.g., dictionary treatment) and more easily interpretable adjective-noun and noun-noun sequences (e.g., rough treatment and torture treatment). In all three experiments, first-pass reading time was longer on the head noun (treatment) when it occurred in a difficult compound. Further, a preceding sentence that provided a potential interpretation of the critical compound reduced processing difficulty, but this modulation by context occurred in later eye movement measures, or downstream of the compound itself. These results are interpreted in relation to the eye movement literature on the processing of implausibility, which demonstrates a similar pattern in which the disruption in early eye movement measures is not alleviated by context, but context does have a later effect. The results also suggest that the interpretation of noun-noun compounds in context does initially depend on the availability of an out-of-context interpretation. |
Derya Çokal; Patrick Sturt; Fernanda Ferreira Deixis: This and that in written narrative discourse Journal Article In: Discourse Processes, vol. 51, no. 3, pp. 201–229, 2014. @article{Cokal2014, The existing literature presents conflicting models of how this and that access different segments of a written discourse, frequently relying on implicit analogies with spoken discourse. On the basis of this literature, we hypothesized that in written discourse, this more readily accesses the adjacent/right frontier of a preceding chunk of text, whereas that more readily accesses the distant/left. We tested this hypothesis in two eye-tracking experiments, one sentence completion experiment, and one corpus study. Our results showed that both this and that access the adjacent frontier more easily than the distant. Contrary to existing theories, this accessed the distant frontier more frequently and easily than that. We propose a processing model integrating segmented discourse representation theory's concept of the left/distant leaf with Grosz and Sidner's attentional and intentional model and Garrod and Sandford's focus framework model, suggesting an important role for working memory and emphasizing the different production modes of readers and writers. |
Thérèse Collins Trade-off between spatiotopy and saccadic plasticity Journal Article In: Journal of Vision, vol. 14, no. 12, pp. 1–14, 2014. @article{Collins2014, Saccadic eye movements bring objects of interest onto the high-resolution fovea. They also change the retinal location of objects, but our impression of the visual world is stable: We represent our visual world in spatiotopic coordinates. Visual stability could be the result of a null hypothesis that things do not move during a saccade, or of realigning retinal images based on an internal copy of the eye movement (remapping). The current studies disentangled these hypotheses. Subjects saccaded to peripheral targets that were displaced by different amounts during execution, and detected or discriminated displacement direction. Evidence for a null hypothesis was provided by the relatively poor perceptual performance, and evidence for remapping by the independence of performance from saccade endpoint. There was a trade-off between spatiotopic performance and saccadic plasticity: Good performance (perception of displacements) led to small compensatory modifications in saccade amplitude on the next trial while poor performance led to larger modifications. Results also showed that variations in saccade amplitude depended on the size of the retinal error and of the saccade itself. These results suggest that when faced with a discrepancy between the saccade endpoint and visual target, the visual system attributes the discrepancy to object displacement or to saccade error, in which case the subsequent saccade is corrected. This result reconciles the two hypotheses by suggesting that accurate remapping serves oculomotor accuracy but not visual stability. Internal copies of eye movements may thus be used separately to establish spatiotopic representations and to maintain oculomotor accuracy. |
Katrin Preckel; Karlijn Massar Imprinting effects on visual attention to faces and judgments of attractiveness Journal Article In: EvoS Journal, vol. 6, no. 2, pp. 1–16, 2014. @article{Preckel2014, Previous studies have shown that human mate-choice can be influenced by exposure to opposite-sex parent characteristics. In this study we examined whether there are sexual-imprinting effects of fathers on their daughter's partner-choice. To this end our participants were asked to bring a picture of their father to the laboratory, and next an eye-tracker was used to determine participants' gaze directions while they were judging male faces for attractiveness. Participants were single, female undergraduates (n = 50, M age = 22 |
Katrin H. Preller; Marcus Herdener; Leonhard Schilbach; Philipp Stampfli; Lea M. Hulka; Matthias Vonmoos; Nina Ingold; Kai Vogeley; Philippe N. Tobler; Erich Seifritz; Boris B. Quednow Functional changes of the reward system underlie blunted response to social gaze in cocaine users Journal Article In: Proceedings of the National Academy of Sciences, vol. 111, no. 7, pp. 2842–2847, 2014. @article{Preller2014, Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants' gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. |
Elsie Premereur; Wim Vanduffel; Peter Janssen The effect of FEF microstimulation on the responses of neurons in the lateral intraparietal area Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 8, pp. 1672–1684, 2014. @article{Premereur2014, The macaque FEFs and the lateral intraparietal area (LIP) are high-level cortical areas involved in both spatial attention and oculomotor behavior. Stimulating FEF at a level below the threshold for evoking saccades increases fMRI activity and gamma power in area LIP, but the precise effect exerted by the FEF on LIP neurons is unknown. In our study, we recorded LIP single-unit activity during a visually guided saccade task with a peripherally presented go signal during microstimulation of FEF. We found that FEF microstimulation increased the LIP spike rate immediately after the highly salient go signal inside the LIP receptive field when both target and go signal were presented inside the receptive field, and no other possible go cues were present on the screen. The effect of FEF microstimulation on the LIP response was positive until at least 800 msec after microstimulation had ceased, but reversed for longer trial durations. Therefore, FEF microstimulation can modulate the LIP spike rate only when attention is selectively directed toward the stimulated location. These results provide the first direct evidence for LIP spike rate modulations caused by FEF microstimulation, thus showing that FEF activity can be the source of top–down control of area LIP. |
Nicholas S. C. Price; J. Blum Motion perception correlates with volitional but not reflexive eye movements Journal Article In: Neuroscience, vol. 277, pp. 435–445, 2014. @article{Price2014, Visually-driven actions and perception are traditionally ascribed to the dorsal and ventral visual streams of the cortical processing hierarchy. However, motion perception and the control of tracking eye movements both depend on sensory motion analysis by neurons in the dorsal stream, suggesting that the same sensory circuits may underlie both action and perception. Previous studies have suggested that multiple sensory modules may be responsible for the perception of low- and high-level motion, or the detection versus identification of motion direction. However, it remains unclear whether the sensory processing systems that contribute to direction perception and the control of eye movements have the same neuronal constraints. To address this, we examined inter-individual variability across 36 observers, using two tasks that simultaneously assessed the precision of eye movements and direction perception: in the smooth pursuit task, observers volitionally tracked a small moving target and reported its direction; in the ocular following task, observers reflexively tracked a large moving stimulus and reported its direction. We determined perceptual-oculomotor correlations across observers, defined as the correlation between each observer's mean perceptual precision and mean oculomotor precision. Across observers, we found that: (i) mean perceptual precision was correlated between the two tasks; (ii) mean oculomotor precision was correlated between the tasks, and (iii) oculomotor and perceptual precision were correlated for volitional smooth pursuit, but not reflexive ocular following. Collectively, these results demonstrate that sensory circuits with common neuronal constraints subserve motion perception and volitional, but not reflexive eye movements. |
Heinz-Werner Priess; Nils Heise; Florian Fischmeister; Sabine Born; Herbert Bauer; Ulrich Ansorge Attentional capture and inhibition of saccades after irrelevant and relevant cues Journal Article In: Journal of Ophthalmology, pp. 1–12, 2014. @article{Priess2014, Attentional capture is usually stronger for task-relevant than irrelevant stimuli, whereas irrelevant stimuli can trigger equal or even stronger amounts of inhibition than relevant stimuli. Capture and inhibition, however, are typically assessed in separate trials, leaving it open whether or not inhibition of irrelevant stimuli is a consequence of preceding attentional capture by the same stimuli or whether inhibition is the only response to these stimuli. Here, we tested the relationship between capture and inhibition in a setup allowing for estimates of capture and inhibition based on the very same trials. We recorded saccadic inhibition after relevant and irrelevant stimuli. At the same time, we recorded the N2pc, an event-related potential reflecting initial capture of attention. We found attentional capture not only for relevant but, importantly, also for irrelevant stimuli, although the N2pc was stronger for relevant than irrelevant stimuli. In addition, inhibition of saccades was the same for relevant and irrelevant stimuli. We conclude with a discussion of the mechanisms that are responsible for these effects. |
Claudio M. Privitera; Thom Carney; Stanley A. Klein; Mario Aguilar Analysis of microsaccades and pupil dilation reveals a common decisional origin during visual search Journal Article In: Vision Research, vol. 95, pp. 43–50, 2014. @article{Privitera2014, During free-viewing visual search, observers often refixate the same locations several times before and after target detection is reported with a button press. We analyzed the rate of microsaccades in the sequence of refixations made during visual search and found two important components. One relates to the visual content of the region being fixated; fixations on targets generate more microsaccades, and more microsaccades are generated for those targets that are more difficult to disambiguate. The other emphasizes non-visual decisional processes; fixations containing the button press generate more microsaccades than those made on the same target but without the button press. Pupil dilation during the same refixations reveals a similar modulation. We inferred that generic sympathetic arousal mechanisms are part of the articulated complex of perceptual processes governing fixational eye movements. |
Liina Pylkkänen; Douglas K. Bemis; Estibaliz Blanco Elorrieta Building phrases in language production: An MEG study of simple composition Journal Article In: Cognition, vol. 133, no. 2, pp. 371–384, 2014. @article{Pylkkaenen2014, Although research on language production has developed detailed maps of the brain basis of single word production in both time and space, little is known about the spatiotemporal dynamics of the processes that combine individual words into larger representations during production. Studying composition in production is challenging due to difficulties both in controlling produced utterances and in measuring the associated brain responses. Here, we circumvent both problems using a minimal composition paradigm combined with the high temporal resolution of magnetoencephalography (MEG). With MEG, we measured the planning stages of simple adjective-noun phrases ('red tree'), matched list controls ('red, blue'), and individual nouns ('tree') and adjectives ('red'), with results indicating combinatorial processing in the ventro-medial prefrontal cortex (vmPFC) and left anterior temporal lobe (LATL), two regions previously implicated for the comprehension of similar phrases. These effects began relatively quickly (~180 ms) after the presentation of a production prompt, suggesting that combination commences with initial lexical access. Further, while in comprehension, vmPFC effects have followed LATL effects, in this production paradigm vmPFC effects occurred mostly in parallel with LATL effects, suggesting that a late process in comprehension is an early process in production. Thus, our results provide a novel neural bridge between psycholinguistic models of comprehension and production that posit functionally similar combinatorial mechanisms operating in reversed order. |
Weston Pack; Stanley A. Klein; Thom Carney Bias corrected double judgment accuracy during spatial attention cueing: Unmasked stimuli with non-predictive and semi-predictive cues Journal Article In: Vision Research, vol. 105, pp. 213–225, 2014. @article{Pack2014, The present experiments indicate that in a 7-AFC double judgment accuracy task with unmasked stimuli, cue location response bias can be quantified and removed, revealing unbiased improvements in response accuracy for valid cues compared to invalid cues. By testing for cueing effects over a range of contrast levels with unmasked stimuli, changes in the psychometric function were examined and provide insight into the mechanisms of involuntary attention which might account for the observed cueing effects. Cue validity was varied between two separate experiments showing that non-predictive (14.3%) and moderately-predictive cues (50%) equally facilitate stimulus identification and localization during transient involuntary attention capture. Observers had improved accuracy at identifying both the location and the feature identity of target letters throughout a range of contrast levels, without any dependence on backward masking. There was a leftward shift of the psychometric function threshold with valid cued data and no slope reduction suggesting that any additive hypothesis based on spatial uncertainty reduction or perceptual enhancement is not a sufficient explanation for the observed cueing effects. The interdependence of the perceptual processes of stimulus discrimination and localization were also investigated by analyzing response contingencies, showing that observers were equally skilled at making identification and localization accuracy judgments with unmasked stimuli. |
Weston Pack; Stanley A. Klein; Thom Carney Bias-free double judgment accuracy during spatial attention cueing: Performance enhancement from voluntary and involuntary attention Journal Article In: Vision Research, vol. 105, pp. 204–212, 2014. @article{Pack2014a, Recent research has demonstrated that involuntary attention improves target identification accuracy for letters using non-predictive peripheral cues, helping to resolve some of the controversy over performance enhancement from involuntary attention. While various cueing studies have demonstrated that their reported cueing effects were not due to response bias to the cue, very few investigations have quantified the extent of any response bias or developed methods of removing bias from observed results in a double judgment accuracy task. We have devised a method to quantify and remove response bias to cued locations in a double judgment accuracy cueing task, revealing the true, unbiased performance enhancement from involuntary and voluntary attention. In a 7-alternative forced choice cueing task using backward masked stimuli to temporally constrain stimulus processing, non-predictive cueing increased target detection and discrimination at cued locations relative to uncued locations even after cue location bias had been corrected. |
Céline Paeye; Laurent Madelain Reinforcing saccadic amplitude variability in a visual search task Journal Article In: Journal of Vision, vol. 14, no. 13, pp. 1–18, 2014. @article{Paeye2014, Human observers often adopt rigid scanning strategies in visual search tasks, even though this may lead to suboptimal performance. Here we ask whether specific levels of saccadic amplitude variability may be induced in a visual search task using reinforcement learning. We designed a new gaze-contingent visual foraging task in which finding a target among distractors was made contingent upon specific saccadic amplitudes. When saccades of rare amplitudes led to displaying the target, the U values (measuring uncertainty) increased by 54.89% on average. They decreased by 41.21% when reinforcing frequent amplitudes. In a noncontingent control group no consistent change in variability occurred. A second experiment revealed that this learning transferred to conventional visual search trials. These results provide experimental support for the importance of reinforcement learning for saccadic amplitude variability in visual search. |
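The study's U measure quantifies the uncertainty (variability) of the saccadic amplitude distribution. One standard way to compute such an index is the Shannon entropy of the binned amplitude distribution, shown below as an illustrative sketch; the paper's exact definition of U may differ, and the data here are invented.

```python
# Minimal sketch: quantify saccadic-amplitude variability as the Shannon
# entropy (in bits) of the binned amplitude distribution. Illustrative only.
import numpy as np

def amplitude_entropy(amplitudes, bins=20, range_deg=(0.0, 20.0)):
    counts, _ = np.histogram(amplitudes, bins=bins, range=range_deg)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))

stereotyped = np.random.normal(8.0, 1.0, 500)   # narrow amplitude distribution
variable = np.random.uniform(2.0, 16.0, 500)    # broad amplitude distribution
print(amplitude_entropy(stereotyped), amplitude_entropy(variable))
# Reinforcing rare amplitudes should push the entropy up; reinforcing frequent
# amplitudes should push it down.
```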
Adam Palanica; Roxane J. Itier Effects of peripheral eccentricity and head orientation on gaze discrimination Journal Article In: Visual Cognition, vol. 22, no. 9-10, pp. 1216–1232, 2014. @article{Palanica2014, Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgement tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgements, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. |
Jinger Pan; Ming Yan; Jochen Laubrock; Hua Shu; Reinhold Kliegl Saccade-target selection of dyslexic children when reading Chinese Journal Article In: Vision Research, vol. 97, pp. 24–30, 2014. @article{Pan2014, This study investigates the eye movements of dyslexic children and their age-matched controls when reading Chinese. Dyslexic children exhibited more and longer fixations than age-matched control children, and an increase of word length resulted in a greater increase in the number of fixations and gaze durations for the dyslexic than for the control readers. The report focuses on the finding that there was a significant difference between the two groups in the fixation landing position as a function of word length in single-fixation cases, while there was no such difference in the initial fixation of multi-fixation cases. We also found that both groups had longer incoming saccade amplitudes while the launch sites were closer to the word in single fixation cases than in multi-fixation cases. Our results suggest that dyslexic children's inefficient lexical processing, in combination with the absence of orthographic word boundaries in Chinese, leads them to select saccade targets at the beginning of words conservatively. These findings provide further evidence for parafoveal word segmentation during reading of Chinese sentences. |
Xiaochuan Pan; Hongwei Fan; Kosuke Sawa; Ichiro Tsuda; Minoru Tsukada; Masamichi Sakagami Reward inference by primate prefrontal and striatal neurons Journal Article In: Journal of Neuroscience, vol. 34, no. 4, pp. 1380–1396, 2014. @article{Pan2014a, The brain contains multiple yet distinct systems involved in reward prediction. To understand the nature of these processes, we recorded single-unit activity from the lateral prefrontal cortex (LPFC) and the striatum in monkeys performing a reward inference task using an asymmetric reward schedule. We found that neurons both in the LPFC and in the striatum predicted reward values for stimuli that had been previously well experienced with set reward quantities in the asymmetric reward task. Importantly, these LPFC neurons could predict the reward value of a stimulus using transitive inference even when the monkeys had not yet learned the stimulus–reward association directly; whereas these striatal neurons did not show such an ability. Nevertheless, because there were two set amounts of reward (large and small), the selected striatal neurons were able to exclusively infer the reward value (e.g., large) of one novel stimulus from a pair after directly experiencing the alternative stimulus with the other reward value (e.g., small). Our results suggest that although neurons that predict reward value for old stimuli in the LPFC could also do so for new stimuli via transitive inference, those in the striatum could only predict reward for new stimuli via exclusive inference. Moreover, the striatum showed more complex functions than was surmised previously for model-free learning. |
Pierpaolo Pani; Tom Theys; Maria C. Romero; Peter Janssen Grasping execution and grasping observation activity of single neurons in the macaque anterior intraparietal area Journal Article In: Journal of Cognitive Neuroscience, vol. 26, no. 10, pp. 2342–2355, 2014. @article{Pani2014, Primates use vision to guide their actions in everyday life. Visually guided object grasping is known to rely on a network of cortical areas located in the parietal and premotor cortex. We recorded in the anterior intraparietal area (AIP), an area in the dorsal visual stream that is critical for object grasping and densely connected with the premotor cortex, while monkeys were grasping objects under visual guidance and during passive fixation of videos of grasping actions from the first-person perspective. All AIP neurons in this study responded during grasping execution in the light, that is, became more active after the hand had started to move toward the object and during grasping in the dark. More than half of these AIP neurons responded during the observation of a video of the same grasping actions on a display. Furthermore, these AIP neurons responded as strongly during passive fixation of movements of a hand on a scrambled background and to a lesser extent to a shape appearing within the visual field near the object. Therefore, AIP neurons responding during grasping execution also respond during passive observation of grasping actions and most of them even during passive observation of movements of a simple shape in the visual field. |
Sebastian Pannasch; Jens R. Helmert; Bruce C. Hansen; Adam M. Larson; Lester C. Loschky Commonalities and differences in eye movement behavior when exploring aerial and terrestrial scenes Journal Article In: Cartography from Pole to Pole, pp. 421–430, 2014. @article{Pannasch2014, Eye movements can provide fast and precise insights into ongoing mechanisms of attention and information processing. In free exploration of natural scenes, it has repeatedly been shown that fixation durations increase over time, while saccade amplitudes decrease. This gaze behavior has been explained as a shift from ambient (global) to focal (local) processing as a means to efficiently understand different environments. In the current study, we analyzed eye movement behavior during the inspection of terrestrial and aerial views of real-world scene images. Our results show that the ambient to focal strategy is preserved across both perspectives. However, there are several perspective-related differences: For aerial views, the first fixation duration is prolonged, showing immediate processing difficulties. Furthermore, fixation durations and saccade amplitudes are longer throughout the overall time of scene exploration, showing continued difficulties that affect both processing of information and image scanning strategies. The temporal and spatial scanning of aerial views is also less similar between observers than for terrestrial scenes, suggesting an inability to use normal scanning patterns. The observed differences in eye movement behavior when inspecting terrestrial and aerial views suggest an increased processing effort for visual information that deviates from our everyday experiences. |
Muriel T. N. Panouillères; Ouazna Habchi; Peggy Gerardin; Roméo Salemme; Christian Urquizar; Alessandro Farnè; Denis Pélisson A role for the parietal cortex in sensorimotor adaptation of saccades Journal Article In: Cerebral Cortex, vol. 24, no. 2, pp. 304–314, 2014. @article{Panouilleres2014, Sensorimotor adaptation ensures movement accuracy despite continuously changing environment and body. Adaptation of saccadic eye movements is a classical model of sensorimotor adaptation. Beside the well-established role of the brainstem-cerebellum in the adaptation of reactive saccades (RSs), the cerebral cortex has been suggested to be involved in the adaptation of voluntary saccades (VSs). Here, we provide direct evidence for a causal involvement of the parietal cortex in saccadic adaptation. First, the posterior intraparietal sulcus (pIPS) was identified in each subject using functional magnetic resonance imaging (fMRI). Then, a saccadic adaptation paradigm was used to progressively reduce the amplitude of RSs and VSs, while single-pulse transcranial magnetic stimulation (spTMS) was applied over the right pIPS. The perturbations of pIPS resulted in impairment for the adaptation of VSs, selectively when spTMS was applied 60 ms after saccade onset. In contrast, the adaptation of RSs was facilitated by spTMS applied 90 ms after saccade initiation. The differential effect of spTMS relative to saccade types suggests a direct interference with pIPS activity for the VS adaptation and a remote interference with brainstem-cerebellum activity for the RS adaptation. These results support the hypothesis that the adaptation of VSs and RSs involves different neuronal substrates. |
Angelina Paolozza; Carmen Rasmussen; Jacqueline Pei; Ana Hanlon-Dearman; Sarah M. Nikkel; Gail Andrew; Audrey McFarlane; Dawa Samdup; James N. Reynolds Working memory and visuospatial deficits correlate with oculomotor control in children with fetal alcohol spectrum disorder Journal Article In: Behavioural Brain Research, vol. 263, pp. 70–79, 2014. @article{Paolozza2014a, Previous studies have demonstrated that children with Fetal Alcohol Spectrum Disorder (FASD) exhibit deficits in measures of eye movement control that probe aspects of visuospatial processing and working memory. The goal of the present study was to examine, in a large cohort of children with FASD, prenatal alcohol exposure (PAE) but not FASD, and typically developing control children, the relationship between performance in eye movement tasks and standardized psychometric tests that assess visuospatial processing and working memory. Participants for this dataset were drawn from a large, multi-site investigation, and included children and adolescents aged 5-17 years diagnosed with an FASD (n=71), those with PAE but no clinical FASD diagnosis (n=20), and typically developing controls (n=111). Participants completed a neurobehavioral test battery and a series of saccadic eye movement tasks. The FASD group performed worse than controls on the psychometric and eye movement measures of working memory and visuospatial skills. Within the FASD group, digit recall, block recall, and animal sorting were negatively correlated with sequence errors on the memory-guided task, and arrows was negatively correlated with prosaccade endpoint error. There were no significant correlations in the control group. These data suggest that psychometric tests and eye movement control tasks may assess similar domains of cognitive function, and these assessment tools may be measuring overlapping brain regions damaged due to prenatal alcohol exposure. The results of this study demonstrate that eye movement control tasks directly relate to outcome measures obtained with psychometric tests and are able to assess multiple domains of cognition simultaneously, thereby allowing for an efficient and accurate assessment. |
Angelina Paolozza; Carmen Rasmussen; Jacqueline Pei; Ana Hanlon-Dearman; Sarah M. Nikkel; Gail Andrew; Audrey McFarlane; Dawa Samdup; James N. Reynolds Deficits in response inhibition correlate with oculomotor control in children with fetal alcohol spectrum disorder and prenatal alcohol exposure Journal Article In: Behavioural Brain Research, vol. 259, pp. 97–105, 2014. @article{Paolozza2014, Children with fetal alcohol spectrum disorder (FASD) or prenatal alcohol exposure (PAE) frequently exhibit impairment on tasks measuring inhibition. The objective of this study was to determine if a performance-based relationship exists between psychometric tests and eye movement tasks in children with FASD. Participants for this dataset were aged 5-17 years and included those diagnosed with an FASD (n = 72), those with PAE but no clinical FASD diagnosis (n = 21), and typically developing controls (n = 139). Participants completed a neurobehavioral test battery, which included the NEPSY-II subtests of auditory attention, response set, and inhibition. Each participant completed a series of saccadic eye movement tasks, which included the antisaccade and memory-guided tasks. Both the FASD and the PAE groups performed worse than controls on the subtest measures of attention and inhibition. Compared with controls, the FASD group made more errors on the antisaccade and memory-guided tasks. Among the combined FASD/PAE group, inhibition and switching errors were negatively correlated with direction errors on the antisaccade task but not on the memory-guided task. There were no significant correlations in the control group. These data suggest that response inhibition deficits in children with FASD/PAE are associated with difficulty controlling saccadic eye movements, which may point to overlapping brain regions damaged by prenatal alcohol exposure. The results of this study demonstrate that eye movement control tasks directly relate to outcome measures obtained with psychometric tests that are used during FASD diagnosis, and may therefore help with early identification of children who would benefit from a multidisciplinary diagnostic assessment. |
Angelina Paolozza; Sarah Treit; Christian Beaulieu; James N. Reynolds In: NeuroImage: Clinical, vol. 5, pp. 53–61, 2014. @article{Paolozza2014b, Response inhibition is the ability to suppress irrelevant impulses to enable goal-directed behavior. The underlying neural mechanisms of inhibition deficits are not clearly understood, but may be related to white matter connectivity, which can be assessed using diffusion tensor imaging (DTI). The goal of this study was to investigate the relationship between response inhibition during the performance of saccadic eye movement tasks and DTI measures of the corpus callosum in children with or without Fetal Alcohol Spectrum Disorder (FASD). Participants included 43 children with an FASD diagnosis (12.3 ± 3.1 years old) and 35 typically developing children (12.5 ± 3.0 years old) both aged 7-18, assessed at three sites across Canada. Response inhibition was measured by direction errors in an antisaccade task and timing errors in a delayed memory-guided saccade task. Manual deterministic tractography was used to delineate six regions of the corpus callosum and calculate fractional anisotropy (FA), mean diffusivity (MD), parallel diffusivity, and perpendicular diffusivity. Group differences in saccade measures were assessed using t-tests, followed by partial correlations between eye movement inhibition scores and corpus callosum FA and MD, controlling for age. Children with FASD made more saccade direction errors and more timing errors, which indicates a deficit in response inhibition. The only group difference in DTI metrics was significantly higher MD of the splenium in FASD compared to controls. Notably, direction errors in the antisaccade task were correlated negatively to FA and positively to MD of the splenium in the control, but not the FASD group, which suggests that alterations in connectivity between the two hemispheres of the brain may contribute to inhibition deficits in children with FASD. |
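The correlational analysis described above (eye-movement error scores against corpus callosum DTI metrics, controlling for age) amounts to a partial correlation. The sketch below shows one generic way to compute it by residualization; the variable names and data are illustrative and are not the study's dataset or pipeline.

```python
# Minimal sketch: partial correlation between antisaccade direction errors and
# splenium FA, controlling for age, via residualization. Data are invented.
import numpy as np

def partial_corr(x, y, covar):
    # Regress x and y on the covariate, then correlate the residuals.
    def residuals(v, c):
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residuals(x, covar), residuals(y, covar)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
age = rng.uniform(7, 18, 35)
fa = 0.50 + 0.010 * age + rng.normal(0, 0.02, 35)   # hypothetical splenium FA
errors = 20 - 0.8 * age + rng.normal(0, 2.0, 35)    # hypothetical direction errors
print(partial_corr(errors, fa, age))
```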
Eleni Papageorgiou; Rebecca J. McLean; Irene Gottlob Nystagmus in childhood Journal Article In: Pediatrics and Neonatology, vol. 55, no. 5, pp. 341–351, 2014. @article{Papageorgiou2014, Nystagmus is an involuntary rhythmic oscillation of the eyes, which leads to reduced visual acuity due to the excessive motion of images on the retina. Nystagmus can be grouped into infantile nystagmus (IN), which usually appears in the first 3-6 months of life, and acquired nystagmus (AN), which appears later. IN can be idiopathic or associated with albinism, retinal disease, low vision, or visual deprivation in early life, for example due to congenital cataracts, optic nerve hypoplasia, and retinal dystrophies, or it can be part of neurological syndromes and neurologic diseases. It is important to differentiate between infantile and acquired nystagmus. This can be achieved by considering not only the time of onset of the nystagmus, but also the waveform characteristics of the nystagmus. Neurological disease should be suspected when the nystagmus is asymmetrical or unilateral. Electrophysiology, laboratory tests, and a neurological and imaging work-up may be necessary in order to exclude any underlying ocular or systemic pathology in a child with nystagmus. Furthermore, the recent introduction of hand-held spectral domain optical coherence tomography (HH SD-OCT) provides detailed assessment of foveal structure in several pediatric eye conditions associated with nystagmus, and it can be used to determine the underlying cause of infantile nystagmus. Additionally, the development of novel methods to record eye movements can help to obtain more detailed information and assist the diagnosis. Recent advances in the field of genetics have identified the FRMD7 gene as the major cause of hereditary X-linked nystagmus, which will possibly guide research towards gene therapy in the future. Treatment options for nystagmus involve pharmacological and surgical interventions. Clinically proven pharmacological treatments for nystagmus, such as gabapentin and memantine, are now beginning to emerge. In cases of obvious head posture, eye muscle surgery can be performed to shift the null zone of the nystagmus into the primary position, and also to alleviate neck problems that can arise due to an abnormal head posture. |
Carolyn Quam; Daniel Swingley Processing of lexical stress cues by young children Journal Article In: Journal of Experimental Child Psychology, vol. 123, no. 1, pp. 73–89, 2014. @article{Quam2014, Although infants learn an impressive amount about their native-language phonological system by the end of the first year of life, after the first year children still have much to learn about how acoustic dimensions cue linguistic categories in fluent speech. The current study investigated what children have learned about how the acoustic dimension of pitch indicates the location of the stressed syllable in familiar words. Preschoolers (2.5- to 5-year-olds) and adults were tested on their ability to use lexical-stress cues to identify familiar words. Both age groups saw pictures of a bunny and a banana and heard versions of "bunny" and "banana" in which stress either was indicated normally with convergent cues (pitch, duration, amplitude, and vowel quality) or was manipulated such that only pitch differentiated the words' initial syllables. Adults (n=48) used both the convergent cues and the isolated pitch cue to identify the target words as they unfolded. Children (n=206) used the convergent stress cues but not pitch alone in identifying words. We discuss potential reasons for children's difficulty in exploiting isolated pitch cues to stress despite children's early sensitivity to pitch in language. These findings contribute to a view in which phonological development progresses toward the adult state well past infancy. |
A. P. Raghuraman; Camillo Padoa-Schioppa Integration of multiple determinants in the neuronal computation of economic values Journal Article In: Journal of Neuroscience, vol. 34, no. 35, pp. 11583–11603, 2014. @article{Raghuraman2014, Economic goods may vary on multiple dimensions (determinants). A central conjecture in decision neuroscience is that choices between goods are made by comparing subjective values computed through the integration of all relevant determinants. Previous work identified three groups of neurons in the orbitofrontal cortex (OFC) of monkeys engaged in economic choices: (1) offer value cells, which encode the value of individual offers; (2) chosen value cells, which encode the value of the chosen good; and (3) chosen juice cells, which encode the identity of the chosen good. In principle, these populations could be sufficient to generate a decision. Critically, previous work did not assess whether offer value cells (the putative input to the decision) indeed encode subjective values as opposed to physical properties of the goods, and/or whether offer value cells integrate multiple determinants. To address these issues, we recorded from the OFC while monkeys chose between risky outcomes. Confirming previous observations, three populations of neurons encoded the value of individual offers, the value of the chosen option, and the value-independent choice outcome. The activity of both offer value cells and chosen value cells encoded values defined by the integration of juice quantity and probability. Furthermore, both populations reflected the subjective risk attitude of the animals. We also found additional groups of neurons encoding the risk associated with a particular option, the risky nature of the chosen option, and whether the trial outcome was positive or negative. These results provide substantial support for the conjecture described above and for the involvement of OFC in good-based decisions. |
Anis Rahman; Denis Pellerin; Dominique Houzet Influence of number, location and size of faces on gaze in video Journal Article In: Journal of Eye Movement Research, vol. 7, no. 2, pp. 1–11, 2014. @article{Rahman2014, Many studies have reported the preference for faces and influence of faces on gaze, most of them in static images and a few in videos. In this paper, we study the influence of faces in complex free-viewing videos, with respect to the effects of number, location and size of the faces. This knowledge could be used to enrich a face pathway in a visual saliency model. We used eye fixation data from an eye movement experiment, hand-labeled all the faces in the videos watched, and compared the labeled face regions against the eye fixations. We observed that fixations were made in proximity to, or inside, the face regions. We found that 50% of the fixations landed directly on face regions that occupy less than 10% of the entire visual scene. Moreover, the fixation duration on videos with faces is longer than on videos without faces, and longer than the fixation duration on static images with faces. Finally, we analyzed the three influencing factors (Eccentricity, Area, Closeness) with linear regression models. For one face, the E+A combined model was slightly better than the E model and better than the A model. For two faces, the three variables (E, A, C) are tightly coupled and the E+A+C model had the highest score. |
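The nested regression comparison reported above (E, A, and C as predictors of gaze on faces) can be sketched as follows. The data, coefficients, and variable names below are invented for illustration only and do not reproduce the study's analysis.

```python
# Minimal sketch: compare nested linear models predicting the proportion of
# fixations on a face from its eccentricity (E), area (A), and closeness (C).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
E = rng.uniform(0, 15, n)        # eccentricity of the face center (deg)
A = rng.uniform(0.5, 10, n)      # face area (% of the frame)
C = rng.uniform(0, 1, n)         # closeness between faces
fix_prop = 0.6 - 0.02 * E + 0.03 * A + rng.normal(0, 0.05, n)

for name, cols in [("E", [E]), ("A", [A]), ("E+A", [E, A]), ("E+A+C", [E, A, C])]:
    X = np.column_stack(cols)
    r2 = LinearRegression().fit(X, fix_prop).score(X, fix_prop)
    print(f"{name}: R^2 = {r2:.3f}")
```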
Brandon C. W. Ralph; Paul Seli; Vivian O. Y. Cheng; Grayden J. F. Solman; Daniel Smilek Running the figure to the ground: Figure-ground segmentation during visual search Journal Article In: Vision Research, vol. 97, pp. 65–73, 2014. @article{Ralph2014, We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. |
Gary E. Raney; Spencer J. Campbell; Joanna C. Bovee Using eye movements to evaluate the cognitive processes involved in text comprehension Journal Article In: Journal of Visualized Experiments, no. 83, pp. 1–7, 2014. @article{Raney2014, The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli. |
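The eye-movement measures listed above (fixation duration, number of fixations, regressions) are typically derived from a per-fixation report. The sketch below shows one generic way such measures might be computed; the column names and data are illustrative and do not reflect any particular EyeLink output format.

```python
# Minimal sketch: derive basic reading measures (fixation count, mean fixation
# duration, number of regressions) from a per-fixation report. Placeholder data.
import pandas as pd

fixations = pd.DataFrame({
    "trial":    [1, 1, 1, 1, 2, 2, 2],
    "word_idx": [1, 2, 3, 2, 1, 2, 3],                 # word fixated, in text order
    "duration": [210, 185, 240, 300, 195, 220, 260],   # fixation duration (ms)
})

def reading_measures(df):
    # A regression is counted whenever the fixated word index moves backward.
    n_regressions = (df["word_idx"].diff() < 0).sum()
    return pd.Series({
        "n_fixations": len(df),
        "mean_fix_dur_ms": df["duration"].mean(),
        "n_regressions": n_regressions,
    })

print(fixations.groupby("trial").apply(reading_measures))
```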