All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications through 2024 (with some from early 2025) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc., as well as individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2015
Gerard M. Loughnane; John P. Shanley; Edmund C. Lalor; Redmond G. O'Connell Behavioral and electrophysiological evidence of opposing lateral visuospatial asymmetries in the upper and lower visual fields Journal Article In: Cortex, vol. 63, pp. 220–231, 2015. Neurologically healthy individuals typically exhibit a subtle bias towards the left visual field during spatial judgments, known as "pseudoneglect". However, it has yet to be reliably established whether the direction and magnitude of this lateral bias varies along the vertical plane. Here, participants were required to distribute their attention equally across a checkerboard array spanning the entire visual field in order to detect transient targets that appeared at unpredictable locations. Reaction times (RTs) were faster to left hemifield targets in the lower visual field, but the opposite trend was observed for targets in the upper field. Electroencephalogram (EEG) analyses focused on the interval prior to target onset in order to identify endogenous neural correlates of these behavioral asymmetries. The relative hemispheric distribution of pre-target oscillatory alpha power was predictive of RT bias to targets in the lower visual field but not the upper field, indicating separate attentional mechanisms for the upper and lower visual fields. Analysis of multifocal visual-evoked potentials (MVEP) in the pre-target interval also indicated that the opposing upper and lower field asymmetries may impact the magnitude of primary visual cortical responses. These results provide new evidence of a functional segregation of upper and lower field visuospatial processing.
Matthew W. Lowder; Peter C. Gordon The manuscript that we finished: Structural separation reduces the cost of complement coercion Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 2, pp. 526–540, 2015. Two eye-tracking experiments examined the effects of sentence structure on the processing of complement coercion, in which an event-selecting verb combines with a complement that represents an entity (e.g., began the memo). Previous work has demonstrated that these expressions impose a processing cost, which has been attributed to the need to type-shift the entity into an event in order for the sentence to be interpretable (e.g., began writing the memo). Both experiments showed that the magnitude of the coercion cost was reduced when the verb and complement appeared in separate clauses (e.g., The memo that was begun by the secretary; What the secretary began was the memo) compared with when the constituents appeared together in the same clause. The moderating effect of sentence structure on coercion is similar to effects that have been reported for the processing of 2 other types of semantically complex expressions (inanimate subject–verb integration and metonymy). We propose that sentence structure influences the depth at which complex semantic relationships are computed. When the constituents that create the need for a complex semantic interpretation appear in a single clause, readers experience processing difficulty stemming from the need to detect or resolve the semantic mismatch. In contrast, the need to engage in additional processing is reduced when the expression is established across a clause boundary or other structure that deemphasizes the complex relationship.
Matthew W. Lowder; Peter C. Gordon Natural forces as agents: Reconceptualizing the animate-inanimate distinction Journal Article In: Cognition, vol. 136, pp. 85–90, 2015. Research spanning multiple domains of psychology has demonstrated preferential processing of animate as compared to inanimate entities, a pattern that is commonly explained as due to evolutionarily adaptive behavior. Forces of nature represent a class of entities that are semantically inanimate but which behave as if they are animate in that they possess the ability to initiate movement and cause actions. We report an eye-tracking experiment demonstrating that natural forces are processed like animate entities during online sentence processing: they are easier to integrate with action verbs than instruments, and this effect is mediated by sentence structure. The results suggest that many cognitive and linguistic phenomena that have previously been attributed to animacy may be more appropriately attributed to perceived agency. To the extent that this is so, the cognitive potency of animate entities may not be due to vigilant monitoring of the environment for unpredictable events as argued by evolutionary psychologists but instead may be more adequately explained as reflecting a cognitive and linguistic focus on causal explanations that is adaptive because it increases the predictability of events.
Matthew W. Lowder; Peter C. Gordon Focus takes time: Structural effects on reading Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 6, pp. 1733–1738, 2015. Previous eye-tracking work has yielded inconsistent evidence regarding whether readers spend more or less time encoding focused information compared with information that is not focused. We report the results of an eye-tracking experiment that used syntactic structure to manipulate whether a target word was linguistically defocused, neutral, or focused, while controlling for possible oculomotor differences across conditions. As the structure of the sentence made the target word increasingly more focused, reading times systematically increased. We propose that the longer reading times for linguistically focused words reflect deeper encoding, which explains previous findings showing that readers have better subsequent memory for focused versus defocused information.
Steven G. Luke; Kiel Christianson Predicting inflectional morphology from context Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 6, pp. 735–748, 2015. The present studies investigated the influence of the semantic and syntactic predictability of an inflectional morpheme on word recognition and morphological processing. In two eye-tracking experiments, we examined the effect of syntactic and semantic context on the processing of letter transpositions in inflected words. Participants experienced greater and earlier disruption from cross-morpheme letter transpositions when target verbs appeared in a context that syntactically predicted the presence of a past-tense suffix. Further, internal transpositions caused greater and earlier disruption even in monomorphemic verbs when syntactic context created an expectation of morphological complexity. No effect of semantic predictability was observed, potentially because the semantic manipulation was insufficiently strong. The results reveal that syntactic contexts typical of most English sentences can lead readers to make predictions about the morphological structure of upcoming words.
Steven G. Luke; John M. Henderson; Fernanda Ferreira Children's eye-movements during reading reflect the quality of lexical representations: An individual differences approach Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 6, pp. 1675–1683, 2015. The lexical quality hypothesis (Perfetti & Hart, 2002) suggests that skilled reading requires high-quality lexical representations. In children, these representations are still developing, and it has been suggested that this development leads to more adult-like eye-movement behavior during the reading of connected text. To test this idea, a set of young adolescents (aged 11-13 years) completed a standardized measure of lexical quality and then participated in 3 eye-movement tasks: reading, scene search, and pseudoreading. The richness of participants' lexical representations predicted a variety of eye-movement behaviors in reading. Further, the influence of lexical quality was domain specific: Fixation durations in reading diverged from the other tasks as lexical quality increased. These findings suggest that eye movements become increasingly tuned to written language processing as lexical representations become more accurate and detailed.
Thomas Zhihao Luo; John H. R. Maunsell Neuronal modulations in visual cortex are associated with only one of multiple components of attention Journal Article In: Neuron, vol. 86, no. 5, pp. 1182–1188, 2015. Neuronal signals related to visual attention are found in widespread brain regions, and these signals are generally assumed to participate in a common mechanism of attention. However, the behavioral effects of attention in detection can be separated into two distinct components: spatially selective shifts in either the criterion or sensitivity of the subject. Here we show that a paradigm used by many single-neuron studies of attention conflates behavioral changes in the subject's criterion and sensitivity. Then, using a task designed to dissociate these two components, we found that multiple aspects of attention-related neuronal modulations in area V4 of monkey visual cortex corresponded to behavioral shifts in sensitivity, but not criterion. This result suggests that separate components of attention are associated with signals in different brain regions and that attention is not a unitary process in the brain, but instead consists of distinct neurobiological mechanisms.
Yan Luo; Ming Jiang; Yongkang Wong; Qi Zhao Multi-camera saliency Journal Article In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2057–2070, 2015. A significant body of literature on saliency modeling predicts where humans look in a single image or video. Besides the scientific goal of understanding how information is fused from multiple visual sources to identify regions of interest in a holistic manner, there are tremendous engineering applications of multi-camera saliency due to the widespread use of cameras. This paper proposes a principled framework to smoothly integrate visual information from multiple views into a global scene map, and to employ a saliency algorithm incorporating high-level features to identify the most important regions by fusing visual information. The proposed method has the following key distinguishing features compared with its counterparts: (1) the proposed saliency detection is global (salient regions from one local view may not be important in a global context), (2) it does not require special camera deployment or overlapping fields of view, and (3) the key saliency algorithm is effective in highlighting interesting object regions even though no single object detector is used. Experiments on several data sets confirm the effectiveness of the proposed principled framework.
Yingyi Luo; Yunyan Duan; Xiaolin Zhou Processing rhythmic pattern during Chinese sentence reading: An eye movement study Journal Article In: Frontiers in Psychology, vol. 6, pp. 1881, 2015. Prosodic constraints play a fundamental role during both spoken sentence comprehension and silent reading. In Chinese, the rhythmic pattern of the verb-object (V-O) combination has been found to rapidly affect the semantic access/integration process during sentence reading (Luo and Zhou, 2010). Rhythmic pattern refers to the combination of words with different syllabic lengths, with certain combinations disallowed (e.g., [2 + 1]; numbers standing for the number of syllables of the verb and the noun respectively) and certain combinations preferred (e.g., [1 + 1] or [2 + 2]). This constraint extends to the situation in which the combination is used to modify other words. A V-O phrase could modify a noun by simply preceding it, forming a V-O-N compound; when the verb is disyllabic, however, the word order has to be O-V-N and the object is preferred to be disyllabic. In this study, we investigated how the reader processes the rhythmic pattern and word order information by recording the reader's eye-movements. We created four types of sentences by crossing rhythmic pattern and word order in compounding. The compound, embedding a disyllabic verb, could be in the correct O-V-N or the incorrect V-O-N order; the object could be disyllabic or monosyllabic. We found that the reader spent more time and made more regressions on and after the compounds when either type of anomaly was detected during the first pass reading. However, during re-reading (after all the words in the sentence have been viewed), fewer regressive eye movements were found for the anomalous rhythmic pattern, relative to the correct pattern; moreover, only the abnormal rhythmic pattern, not the violated word order, influenced the regressive eye movements. These results suggest that while the processing of rhythmic pattern and word order information occurs rapidly during the initial reading of the sentence, the process of recovering from the rhythmic pattern anomaly may ease the reanalysis processing at the later stage of sentence integration. Thus, rhythmic pattern in Chinese can dynamically affect both local phrase analysis and global sentence integration during silent reading.
Xiao-Qing Li; Hai-Yan Zhao; Yuan-Yuan Zheng; Yu-Fang Yang Two-stage interaction between word order and noun animacy during online thematic processing of sentences in Mandarin Chinese Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 5, pp. 555–573, 2015. How different sources of linguistic information are used during online language comprehension is a central question in psycholinguistic research. This study used eye-tracking and electrophysiological techniques to investigate how and when word order and noun animacy interact with each other during online thematic processing of Mandarin Chinese sentences. The initial argument in the sentence was animate or inanimate, and the following verb disambiguated it as an agent or patient. The results at the verb revealed that, at the early processing stage, the patient-first sentences elicited longer gaze duration and larger N400 than the agent-first ones only when the initial argument was inanimate; however, at the late stage, the patient-first sentences elicited prolonged second-pass time and enhanced P600 only when the initial argument was animate. In addition, the brain oscillations at the verb also showed different patterns in the early and later window latencies. The present results suggested that the online thematic processing of Mandarin Chinese sentences involves not only universal processing strategies (subject-preference) but also language-specific strategies. That is, in Mandarin Chinese, noun animacy interacts with word order immediately during online sentence comprehension; the initial processing results can be overridden by additional interpretively relevant information types at a later stage. These results provide important constraints for models of language comprehension.
Xiaowei Li; Bin Hu; Tingting Xu; Ji Shen; Martyn Ratcliffe A study on EEG-based brain electrical source of mild depressed subjects Journal Article In: Computer Methods and Programs in Biomedicine, vol. 120, no. 3, pp. 135–141, 2015. Background and objective: Several abnormal brain regions are known to be linked to depression, including the amygdala, orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). The aim of this study is to apply EEG (electroencephalogram) data analysis to investigate, with respect to mild depression, whether there exists dysregulation in these brain regions. Methods: EEG sources were assessed from 9 healthy and 9 mildly depressed subjects who were classified according to the Beck Depression Inventory (BDI) criteria. A t-test was used to analyze the eye movement data, and standardized low-resolution tomography (sLORETA) was used to localize EEG activity. Results: A comparison of eye movement data between the healthy and mildly depressed subjects showed that mildly depressed subjects spent more time viewing negative emotional faces. Comparison of the EEG from the two groups indicated higher theta activity in BA6 (Brodmann area) and higher alpha activity in BA38. Conclusions: EEG source localization results suggested that temporal pole activity is dysregulated, and eye-movement data analysis showed that mildly depressed subjects paid more attention to negative face expressions, which is in accordance with the results of EEG source localization.
Xingshan Li; Pingping Liu; Keith Rayner Saccade target selection in Chinese reading Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 2, pp. 524–530, 2015. In Chinese reading, there are no spaces to mark the word boundaries, so Chinese readers cannot target their saccades to the center of a word. In this study, we investigated how Chinese readers decide where to move their eyes during reading. To do so, we introduced a variant of the boundary paradigm in which only the target stimulus remained on the screen, displayed at the saccade landing site, after the participant's eyes crossed an invisible boundary. We found that when the saccade target was a word, reaction times in a lexical decision task were shorter when the saccade landing position was closer to the end of that word. These results are consistent with the predictions of a processing-based strategy to determine where to move the eyes. Specifically, this hypothesis assumes that Chinese readers estimate how much information is processed in parafoveal vision and saccade to a location that will carry novel information.
You Li; Lei Mo; Qi Chen Differential contribution of velocity and distance to time estimation during self-initiated time-to-collision judgment Journal Article In: Neuropsychologia, vol. 73, pp. 35–47, 2015. To successfully intercept or avoid a moving object, the human brain needs to precisely estimate the time-to-collision (TTC) of the object. In real life, time estimation is determined conjointly by the velocity and the distance of a moving object. However, surprisingly little is known concerning whether and how the velocity and the distance dimensions contribute differentially to time estimation. In this fMRI study, we demonstrated that variations of velocity evoked substantially different behavioral and neural responses than distance during self-initiated TTC judgments. Behaviorally, the velocity dimension induced a stronger time dilation effect than the distance dimension, in that participants' responses were significantly more delayed by increasing velocity than by decreasing distance, even with the theoretical TTC being equated between the two conditions. Neurally, activity in the dorsal fronto-parietal TTC network was parametrically modulated by variations in TTC irrespective of whether the variations in TTC were caused by velocity or distance. Importantly, even with spatial distance being equated, increasing velocity induced illusory perception of a longer spatial trajectory in early visual cortex. Moreover, as velocity increased, the early visual cortex showed enhanced connectivity with the TTC network. Our results thus imply that with increasing velocity, TTC judgments depended increasingly on the velocity-induced illusory distance information from early visual cortex and were ultimately biased by it.
Feifei Liang; Hazel I. Blythe; Chuanli Zang; Xuejun Bai; Guoli Yan; Simon P. Liversedge Positional character frequency and word spacing facilitate the acquisition of novel words during Chinese children's reading Journal Article In: Journal of Cognitive Psychology, vol. 27, no. 5, pp. 594–608, 2015. Children's eye movements were recorded to examine the role of word spacing and positional character frequency on the process of Chinese lexical acquisition during reading. Three types of two-character novel pseudowords were constructed: words containing characters in positions in which they frequently occurred (congruent), words containing characters in positions in which they do not frequently occur (incongruent), and words containing characters that do not have a strong position bias (balanced). There were two phases within the experiment, a learning phase and a test phase. There were also two learning groups: half the children read sentences in a word-spaced format and the other half read the sentences in an unspaced format during the learning phase. All the participants read normal, unspaced text at test. A benefit of word spacing was observed in the learning phase, but not at test. Also, facilitatory effects of positional character congruency were found in both the learning and test phases; however, this benefit was greatly reduced at test. Furthermore, we did not find any interaction between word spacing and positional character frequencies, indicating that these two types of cues affect lexical acquisition independently. With respect to theoretical accounts of lexical acquisition, we argue that word spacing might facilitate the very earliest stages of word learning by clearly demarcating word boundary locations. In contrast, we argue that characters' positional frequencies might affect relatively later stages of word learning.
Amy M. Lieberman; Arielle Borovsky; Marla Hatrak; Rachel I. Mayberry Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 4, pp. 1130–1139, 2015. Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, for deaf individuals, the age of onset of first language acquisition and the quality and quantity of linguistic input are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received.
Jung Hyun Lim; Kiel Christianson Second language sensitivity to agreement errors: Evidence from eye movements during comprehension and translation Journal Article In: Applied Psycholinguistics, vol. 36, no. 6, pp. 1283–1315, 2015. The present study addresses the questions of (a) whether Korean learners of English show sensitivity to subject–verb agreement violations in an eye-tracking paradigm, and (b) how reading goals (reading for comprehension vs. translation) and second language (L2) proficiency modulate depth of morphological agreement processing. Thirty-six Korean speakers of L2 English and 32 native English speakers read 40 stimulus sentences, half of which contained subject–verb agreement violations in English. The factors were whether a head and a local intervening noun matched in number and whether a sentence was grammatical or not. In linear mixed models analyses, both agreement violations and noun phrase match/mismatch were found to be disruptive in processing for native speakers at the critical regions (verb and following word), and locally distracting number-marked nouns yielded an asymmetric pattern depending on grammaticality. When L2 speakers were asked to produce offline oral translations of the English sentences into Korean, they became more sensitive to agreement violations. In addition, higher L2 proficiency predicted greater sensitivity to morphological violations. The results indicate that L2 speakers are not necessarily insensitive to morphological violations and that L2 proficiency and task modulate the depth of L2 morphological processing.
Yi Chun Lin; Tzu Chien Liu; John Sweller In: Computers & Education, vol. 88, pp. 280–291, 2015. Computer simulations were used to teach students basic concepts associated with correlation. Half of the students were presented information in a sequential series of single frames in which each frame replaced the preceding frame, while the other half were presented the information in simultaneous multiple frames in which each frame was added to the previous frames without replacement. It was hypothesized that if the isolated elements effect occurs, the single-frame condition should be superior. Alternatively, if the transient information effect dominates, the multiple-frame condition should be superior. Results confirmed the superiority of the single-frame presentation. Eye-tracking indicated that participants who learned with single frames paid more attention to the important representations than participants who learned with multiple frames.
Sam Ling; Michael S. Pratte; Frank Tong Attention alters orientation processing in the human lateral geniculate nucleus Journal Article In: Nature Neuroscience, vol. 18, no. 4, pp. 496–498, 2015. Orientation selectivity is a cornerstone property of vision, commonly believed to emerge in the primary visual cortex. We found that reliable orientation information could be detected even earlier, in the human lateral geniculate nucleus, and that attentional feedback selectively altered these orientation responses. This attentional modulation may allow the visual system to modify incoming feature-specific signals at the earliest possible processing site.
Matteo Lisi; Patrick Cavanagh; Marco Zorzi Spatial constancy of attention across eye movements is mediated by the presence of visual objects Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 4, pp. 1159–1169, 2015. Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We found an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) had been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results.
Pingping Liu; Xingshan Li; Buxin Han Additive effects of stimulus quality and word frequency on eye movements during Chinese reading Journal Article In: Reading and Writing, vol. 28, no. 2, pp. 199–215, 2015. Eye movements of Chinese readers were recorded for sentences in which high- and low-frequency target words were presented normally or with reduced stimulus quality in two experiments. We found stimulus quality and word frequency produced strong additive effects on fixation durations for target words. The results demonstrate that stimulus quality and word frequency affect different stages of processing (e.g., visual processing and lexical processing). These results are consistent with the findings of previous single-word lexical decision studies, which showed that stimulus quality manipulation primarily affects the early pre-attentive stage of visual processing, whereas word frequency affects lexical processes. We discuss these findings in terms of the role of stimulus quality in word recognition and in relation to the E-Z Reader model of eye movement control.
Pingping Liu; Danlu Liu; Buxin Han; Kevin B. Paterson Aging and the optimal viewing position effect in Chinese Journal Article In: Frontiers in Psychology, vol. 6, pp. 1656, 2015. Substantial evidence indicates that where readers fixate within a word affects the efficiency with which that word is recognized. Indeed, words in alphabetic languages (e.g., English, French) are recognized most efficiently when fixated at their optimal viewing position (OVP), which is near the word center. However, little is known about the effects of fixation location on word recognition in non-alphabetic languages, such as Chinese. Moreover, studies to date have not investigated if effects of fixation location vary across adult age-groups, although it is well-established that older readers experience greater difficulty recognizing words due to visual and cognitive declines. Accordingly, the present research examined OVP effects by young and older adult readers when recognizing Chinese words presented in isolation. Most words in Chinese are formed from two or more logograms called characters and so the present experiment investigated the influence of fixation location on the recognition of 2-, 3-, and 4-character words (and nonwords). The older adults experienced generally greater word recognition difficulty. But whereas the young adults recognized words most efficiently when initially fixating the first character of 2-character words and second character of 3- and 4-character words, the older adults recognized words most efficiently when initially fixating the first character for words of each length. The findings therefore reveal subtle but potentially important adult age differences in the effects of fixation location on Chinese word recognition. Moreover, the similarity in effects for words and nonwords implies a more general age-related change in oculomotor strategy when processing Chinese character-strings.
Yanping Liu; Erik D. Reichle; Xingshan Li Parafoveal processing affects outgoing saccade length during the reading of Chinese Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 4, pp. 1229–1236, 2015. Participants' eye movements were measured while reading Chinese sentences in which target-word frequency and the availability of parafoveal processing were manipulated using a gaze-contingent boundary paradigm. The results of this study indicate that preview availability and its interaction with word frequency modulated the length of the saccades exiting the target words, suggesting important functional roles for parafoveal processing in determining where the eyes move during reading. The theoretical significance of these findings is discussed in relation to 2 current models of eye-movement control during reading, both of which assume that saccades are directed toward default targets (e.g., the center of the next unidentified word). A possible method for addressing these limitations (i.e., dynamic attention allocation) is also discussed.
Zhiya Liu; Xiaohong Song; Carol A. Seger; Peter J. Hills An eye-tracking study of multiple feature value category structure learning: The role of unique features Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0135729, 2015. @article{Liu2015c, We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. |
Francesc Llorens; Daniel Sanabria; Florentino Huertas; Enrique Molina; Simon J. Bennett Intense physical exercise reduces overt attentional capture Journal Article In: Journal of Sport and Exercise Psychology, vol. 37, no. 5, pp. 559–564, 2015. @article{Llorens2015, The abrupt onset of a visual stimulus typically results in overt attentional capture, which can be quantified by saccadic eye movements. Here, we tested whether attentional capture following onset of task-irrelevant visual stimuli (new object) is reduced after a bout of intense physical exercise. A group of participants performed a visual search task in two different activity conditions: rest, without any prior effort, and effort, immediately after an acute bout of intense exercise. The results showed that participants exhibited (1) slower reaction time of the first saccade toward the target when a new object was simultaneously presented in the visual field, but only in the rest activity condition, and (2) more saccades to the new object in the rest activity condition than in the effort activity condition. We suggest that immediately after an acute bout of effort, participants improved their ability to inhibit irrelevant (distracting) stimuli. |
Patrick Loesche; Jennifer Wiley; Marcus Hasselhorn How knowing the rules affects solving the Raven Advanced Progressive Matrices Test Journal Article In: Intelligence, vol. 48, pp. 58–75, 2015. @article{Loesche2015, The solution process underlying the Raven Advanced Progressive Matrices (RAPM) has been conceptualized to consist of two subprocesses: rule induction and goal management. Past research has also found a strong relation between measures of working memory capacity and performance on RAPM. The present research attempted to test whether the goal management subprocess is responsible for the relation between working memory capacity and RAPM, using a paradigm where the rules necessary to solve the problems were given to subjects, assuming that it would render rule induction unnecessary. Three experiments revealed that working memory capacity was still strongly related to RAPM performance in the given-rules condition, while in two experiments the correlation in the given-rules condition was significantly higher than in the no-rules condition. Experiment 4 revealed that giving the rules affected problem solving behavior. Evidence from eye tracking protocols suggested that participants in the given-rules condition were more likely to approach the problems with a constructive matching strategy. Two possible mechanisms are discussed that could both explain why providing participants with the rules might increase the relation between working memory capacity and RAPM performance. |
Florian Loffing; Florian Sölter; Norbert Hagemann; Bernd Strauss Accuracy of outcome anticipation, but not gaze behavior, differs against left-and right-handed penalties in team-handball goalkeeping Journal Article In: Frontiers in Psychology, vol. 6, pp. 1820, 2015. @article{Loffing2015a, Low perceptual familiarity with relatively rarer left-handed as opposed to more common right-handed individuals may result in athletes' poorer ability to anticipate the former's action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy during confrontation with left-handed individuals. To exemplify, observers may not mirror their gaze when viewing left- vs. right-handed actions but preferentially fixate on an opponent's right body side, irrespective of an opponent's handedness, owing to the predominant exposure to right-handed actions. So far empirical verification of such assumption, however, is lacking. Here we report on an experiment where team-handball goalkeepers' and non-goalkeepers' gaze behavior was recorded while they predicted throw direction of left- and right-handed 7-m penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time-course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers' handedness. Findings suggest that inferior anticipation of left-handed compared to right-handed individuals' action intentions may not be associated with misalignment in gaze behavior. Rather, albeit looking similarly, accuracy differences could be due to observers' differential ability of picking up and interpreting the visual information provided by left- vs. right-handed movements. |
Florian Loffing; Ricarda Stern; Norbert Hagemann Pattern-induced expectation bias in visual anticipation of action outcomes Journal Article In: Acta Psychologica, vol. 161, pp. 45–53, 2015. @article{Loffing2015, When anticipating an opponent's action intention, athletes may rely on both kinematic and contextual cues. Here we show that patterns of previous action outcomes (i.e., a contextual cue) bias visual anticipation of action outcome in subsequent trials. In two video-based experiments, skilled players and novices were presented with volleyball attacks stopping 360 ms (Exp. 1) or 280 ms (Exp. 2) before an attacker's hand-ball-contact and they were asked to predict the type of attack (smash or lob). Attacks were presented block-wise with six attacks per block. The fifth trial served as target trial where we presented identical attacks to control kinematic cues. We varied the outcomes of the preceding four attacks under three conditions: lobs only, smashes only or an alternating pattern of attack outcomes. In Exp. 1, skilled players but not novices were less accurate and responded later in target trials that were incongruent vs. congruent with preceding patterns. In Exp. 2, where the task was easier, another group of novices demonstrated a similar congruence effect for accuracy but not response time. Collectively, findings indicate that participants tended to preferentially expect the continuation of an attack pattern, while possibly attaching less importance to kinematic cues. Thus, overreliance on pattern continuation may be detrimental to anticipation in situations where an action's outcome does not correspond to the pattern. From a methodological viewpoint, comparison of novices' performance in Exp. 1 and 2 suggests that task difficulty may be critical as to whether contextual cue effects can be identified in novices. |
Kaitlin E. W. Laidlaw; Thariq A. Badiudeen; Mona J. H. Zhu; Alan Kingstone A fresh look at saccadic trajectories and task irrelevant stimuli: Social relevance matters Journal Article In: Vision Research, vol. 111, pp. 82–90, 2015. @article{Laidlaw2015, A distractor placed nearby a saccade target will cause interference during saccade planning and execution, and as a result will cause the saccade's trajectory to curve in a systematic way. It has been demonstrated that making a distractor more task-relevant, for example by increasing its similarity to the target, will increase the interference it imposes on the saccade and generate more deviant saccadic trajectories. Is the extent of a distractor's interference within the oculomotor system limited to its relevance to a particular current task, or can a distractor's general real-world meaning influence saccade trajectories even when it is made irrelevant within a task? Here, it is tested whether a task-irrelevant distractor can influence saccade trajectory if it depicts a stimulus that is normally socially relevant. Participants made saccades to a target object while also presented with a task-irrelevant (upright or inverted) face, or scrambled non-face equivalent. Results reveal that a distracting face creates greater deviation in saccade trajectory than does a non-face distractor, most notably at longer saccadic reaction times. These results demonstrate the sensitivity of processing that distractors are afforded by the oculomotor system, and support the view that distractor relevance beyond the task itself can also influence saccade planning and execution. |
Markus Lappe; Fred H. Hamker Peri-saccadic compression to two locations in a two-target choice saccade task Journal Article In: Frontiers in Systems Neuroscience, vol. 9, pp. 135, 2015. @article{Lappe2015, When visual stimuli are presented at the onset of a saccadic eye movement they are seen compressed onto the target location of the saccade. This peri-saccadic compression is believed to result from internal feedback pathways between oculomotor and visual areas of the brain. This feedback enhances vision around the saccade target at the expense of localization ability in other regions of the visual field. Although saccades can be targeted at only one object at a time, often multiple potential targets are available in a visual scene, and the oculomotor system has to choose which target to look at. If two targets are available, preparatory activity builds up at both target locations in oculomotor maps. Here we show that, in this situation, two foci of compression develop, independent of which of the two targets is eventually chosen for the saccade. Our results suggest that theories that use oculomotor feedback as efference copy signals for upcoming eye movements should take the possibility into account that multiple feedback signals from potential targets may occur in parallel before the execution of a saccade. |
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh Detection of fixations and smooth pursuit movements in high-speed eye-tracking data Journal Article In: Biomedical Signal Processing and Control, vol. 18, pp. 145–152, 2015. @article{Larsson2015, A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in [11]), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in [11] (average Cohen's kappa 0.16), when compared to the experts' annotations. |
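The Larsson et al. entry above reports algorithm-versus-expert agreement as Cohen's kappa. For readers unfamiliar with the measure, here is a minimal illustrative sketch of computing kappa between two sample-by-sample labelings; the labels and data are hypothetical, not the authors' code or data.

```python
# Illustrative sketch: Cohen's kappa, the chance-corrected agreement
# measure used to compare event labels (e.g., fixation vs. smooth
# pursuit) against expert annotations. Labels here are made up.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' sample-by-sample labels."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of samples labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of marginals.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2
    return (observed - expected) / (1 - expected)

algo   = ["fix", "fix", "sp", "sp",  "fix", "sp", "fix", "fix"]
expert = ["fix", "fix", "sp", "fix", "fix", "sp", "fix", "sp"]
print(round(cohens_kappa(algo, expert), 2))  # 0.47
```

Here the raw agreement is 0.75, but kappa corrects for the agreement expected by chance given each labeler's marginal frequencies, which is why the reported values (e.g., 0.42 vs. 0.20) are well below raw percent agreement.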
Amandine Lassalle; Roxane J. Itier Autistic traits influence gaze-oriented attention to happy but not fearful faces Journal Article In: Social Neuroscience, vol. 10, no. 1, pp. 70–88, 2015. @article{Lassalle2015, The relationship between autistic traits and gaze-oriented attention to fearful and happy faces was investigated at the behavioral and neuronal levels. Upright and inverted dynamic face stimuli were used in a gaze-cueing paradigm while event related potentials (ERPs) were recorded. Participants responded faster to gazed-at than to non-gazed-at targets, and this gaze orienting effect (GOE) diminished with inversion, suggesting it relies on facial configuration. It was also larger for fearful than happy faces but only in participants with high autism-spectrum quotient (AQ) scores. While the GOE to fearful faces was of similar magnitude regardless of AQ scores, a diminished GOE to happy faces was found in participants with high AQ scores. At the ERP level, a congruency effect on target-elicited P1 component reflected enhanced visual processing of gazed-at targets. In addition, cue-triggered early directing attention negativity and anterior directing attention negativity reflected, respectively, attention orienting and attention holding at gazed-at locations. These neural markers of spatial attention orienting were not modulated by emotion and were not found in participants with high AQ scores. Together, these findings suggest that autistic traits influence attention orienting to gaze and its modulation by social emotions such as happiness. |
Jochen Laubrock; Reinhold Kliegl The eye-voice span during reading aloud Journal Article In: Frontiers in Psychology, vol. 6, pp. 1432, 2015. @article{Laubrock2015, Although eye movements during reading are modulated by cognitive processing demands, they also reflect visual sampling of the input, and possibly preparation of output for speech or the inner voice. By simultaneously recording eye movements and the voice during reading aloud, we obtained an output measure that constrains the length of time spent on cognitive processing. Here we investigate the dynamics of the eye-voice span (EVS), the distance between eye and voice. We show that the EVS is regulated immediately during fixation of a word by either increasing fixation duration or programming a regressive eye movement against the reading direction. EVS size at the beginning of a fixation was positively correlated with the likelihood of regressions and refixations. Regression probability was further increased if the EVS was still large at the end of a fixation: if adjustment of fixation duration did not sufficiently reduce the EVS during a fixation, then a regression rather than a refixation followed with high probability. We further show that the EVS can help understand cognitive influences on fixation duration during reading: in mixed model analyses, the EVS was a stronger predictor of fixation durations than either word frequency or word length. The EVS modulated the influence of several other predictors on single fixation durations (SFDs). For example, word-N frequency effects were larger with a large EVS, especially when word N−1 frequency was low. Finally, a comparison of SFDs during oral and silent reading showed that reading is governed by similar principles in both reading modes, although EVS maintenance and articulatory processing also cause some differences. In summary, the EVS is regulated by adjusting fixation duration and/or by programming a regressive eye movement when the EVS gets too large. 
Overall, the EVS appears to be directly related to updating of the working memory buffer during reading. |
Phillip C. F. Law; Bryan K. Paton; Jacqueline A. Riddiford; Caroline T. Gurvich; Trung T. Ngo; Steven M. Miller No relationship between binocular rivalry rate and eye-movement profiles in healthy individuals: A Bayes factor analysis Journal Article In: Perception, vol. 44, no. 5, pp. 643–661, 2015. @article{Law2015, Binocular rivalry (BR) is an intriguing phenomenon in which conflicting images are presented, one to each eye, resulting in perceptual alternations between each image. The rate of BR has been proposed as a potential endophenotype for bipolar disorder because (a) it is well established that this highly heritable psychiatric condition is associated with slower BR rate than in controls, and (b) an individual's BR rate is approximately 50% genetically determined. However, eye movements (EMs) could potentially account for the slow BR trait given EM anomalies are observed in psychiatric populations, and there has been a report of an association between saccadic rate and BR rate in healthy individuals. Here, we sought to assess the relationship between BR rate and EMs in healthy individuals (N = 40, mean age = 34.4) using separate BR and EM tasks, with the latter measuring saccades during anticipatory, antisaccade, prosaccade, self-paced, free-viewing, and smooth-pursuit tasks. No correlation was found between BR rate and any EM measure for any BR task (p > .01), with substantial evidence favoring this lack of association (BF01 > 3). This finding is in contrast to previous data and has important implications for using BR rate as an endophenotype. If replicated in clinical psychiatric populations, EM interpretations of the slow BR trait can be excluded. |
James Lee; Jessica Manousakis; Joanne Fielding; Clare Anderson Alcohol and sleep restriction combined reduces vigilant attention, whereas sleep restriction alone enhances distractibility Journal Article In: Sleep, vol. 38, no. 5, pp. 765–775, 2015. @article{Lee2015a, STUDY OBJECTIVES: Alcohol and sleep loss are leading causes of motor vehicle crashes, whereby attention failure is a core causal factor. Despite a plethora of data describing the effect of alcohol and sleep loss on vigilant attention, little is known about their effect on voluntary and involuntary visual attention processes. DESIGN: Repeated-measures, counterbalanced design. SETTING: Controlled laboratory setting. PARTICIPANTS: Sixteen young (18-27 y; M = 21.90 ± 0.60 y) healthy males. INTERVENTIONS: Participants completed an attention test battery during the afternoon (13:00-14:00) under four counterbalanced conditions: (1) baseline; (2) alcohol (0.05% breath alcohol concentration); (3) sleep restriction (02:00-07:00); and (4) alcohol/sleep restriction combined. This test battery included a Psychomotor Vigilance Task (PVT) as a measure of vigilant attention, and two ocular motor tasks (visually guided and antisaccade) to measure the involuntary and voluntary allocation of visual attention. MEASUREMENTS AND RESULTS: Only the combined condition led to reductions in vigilant attention characterized by slower mean reaction time, fastest 10% responses, and increased number of lapses (P < 0.05) on the PVT. In addition, the combined condition led to a slowing in the voluntary allocation of attention as reflected by increased antisaccade latencies (P < 0.05). Sleep restriction alone, however, increased both antisaccade inhibitory errors (45.8% errors versus < 28.4% all others; P < 0.001) and the involuntary allocation of attention, as reflected by faster visually guided latencies (177.7 msec versus > 185.0 msec all others) to a peripheral target (P < 0.05). 
CONCLUSIONS: Our data reveal specific signatures for sleep related attention failure: the voluntary allocation of attention is impaired, whereas the involuntary allocation of attention is enhanced. This provides key evidence for the role of distraction in attention failure during sleep loss. |
Jiyeon Lee; Cynthia K. Thompson Phonological facilitation effects on naming latencies and viewing times during noun and verb naming in agrammatic and anomic aphasia Journal Article In: Aphasiology, vol. 29, no. 10, pp. 1164–1188, 2015. @article{Lee2015, Background: Phonological priming has been shown to facilitate naming in individuals with aphasia, as well as healthy speakers, resulting in faster naming latencies. However, the mechanisms of phonological facilitation (PF) in aphasia remain unclear. Aims: Within discrete vs. interactive models of lexical access, this study examined whether PF occurs via the sub-lexical or lexical route during noun and verb naming in agrammatic and anomic aphasia. Methods & Procedures: Thirteen participants with agrammatic aphasia and 10 participants with anomic aphasia and their young and age-matched controls (n = 20/each) were tested. Experiment 1 examined noun and verb naming deficit patterns in an off-line confrontation naming task. Experiment 2 examined PF effects on naming both word categories using an eyetracking priming paradigm. Outcomes & Results: Results of Experiment 1 showed greater naming difficulty for verbs than for nouns in the agrammatic group, with no difference between the two word categories in the anomic group. For both participant groups, errors were dominated by semantic paraphasias, indicating impaired lexical selection. In the phonological priming task (Experiment 2), young and age-matched control groups showed PF in both noun and verb naming. Interestingly, the agrammatic group showed PF when naming verbs, but not nouns, whereas the anomic group showed PF for nouns only. Conclusions: Consistent with lexically mediated PF in interactive models of lexical access, selective PF for different word categories in our agrammatic and anomic groups suggests that phonological primes facilitate lexical selection via feedback activation, resulting in greater PF for more difficult (i.e., verbs in the agrammatic and possibly nouns in the anomic group) lexical items. |
Dingcai Cao; Nathaniel Nicandro; Pablo A. Barrionuevo A five-primary photostimulator suitable for studying intrinsically photosensitive retinal ganglion cell functions in humans Journal Article In: Journal of Vision, vol. 15, no. 1, pp. 1–14, 2015. @article{Cao2015, Intrinsically photosensitive retinal ganglion cells (ipRGCs) can respond to light directly through self-contained photopigment, melanopsin. IpRGCs also receive synaptic inputs from rods and cones. Thus, studying ipRGC functions requires a novel photostimulating method that can account for all of the photoreceptor inputs. Here, we introduced an inexpensive LED-based five-primary photostimulator that can control the excitations of rods, S-, M-, L-cones, and melanopsin-containing ipRGCs in humans at constant background photoreceptor excitation levels, a critical requirement for studying the adaptation behavior of ipRGCs with rod, cone, or melanopsin input. We described the theory and technical aspects (including optics, electronics, software, and calibration) of the five-primary photostimulator. Then we presented two preliminary studies using the photostimulator we have implemented to measure melanopsin-mediated pupil responses and temporal contrast sensitivity function (TCSF). The results showed that the S-cone input to pupil responses was antagonistic to the L-, M- or melanopsin inputs, consistent with an S-OFF and (L + M)-ON response property of primate ipRGCs (Dacey et al., 2005). In addition, the melanopsin-mediated TCSF had a distinctive pattern compared with L + M or S-cone mediated TCSF. Other than controlling individual photoreceptor excitation independently, the five-primary photostimulator has the flexibility in presenting stimuli modulating any combination of photoreceptor excitations, which allows researchers to study the mechanisms by which ipRGCs combine various photoreceptor inputs. |
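The Cao et al. entry above describes a five-primary photostimulator that holds four photoreceptor classes constant while modulating a fifth (silent substitution). The core of that method is linear: with five primaries and five photoreceptor classes, the 5×5 matrix of per-primary photoreceptor excitations can be inverted to find a primary modulation isolating any one class. A minimal NumPy sketch, with made-up (uncalibrated) excitation values standing in for the real spectral calibration:

```python
import numpy as np

# Rows: photoreceptor classes (S, M, L cones, rods, melanopsin);
# columns: the five LED primaries. Entries are each primary's relative
# excitation of each class -- illustrative numbers, not measured spectra.
A = np.array([
    [0.9, 0.1, 0.05, 0.02, 0.01],  # S cone
    [0.2, 0.8, 0.6,  0.3,  0.1 ],  # M cone
    [0.1, 0.5, 0.9,  0.4,  0.2 ],  # L cone
    [0.3, 0.6, 0.4,  0.8,  0.3 ],  # rod
    [0.4, 0.7, 0.3,  0.5,  0.9 ],  # melanopsin
])

# Desired excitation change: modulate melanopsin only, silence the rest.
target = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Solve A @ dp = target for the primary modulation dp.
dp = np.linalg.solve(A, target)
print(np.allclose(A @ dp, target))  # melanopsin-isolating modulation
```

Any combination of photoreceptor excitations can be requested the same way by changing `target`, which is the flexibility the abstract highlights; in practice the solution must also respect each LED's gamut limits.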
Kathleen Carbary; Meredith Brown; Christine Gunlogson; Joyce M. McDonough; Aleksandra Fazlipour; Michael K. Tanenhaus Anticipatory deaccenting in language comprehension Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 1-2, pp. 197–211, 2015. @article{Carbary2015, We evaluated the hypothesis that listeners can generate expectations about upcoming input using anticipatory deaccenting, in which the absence of a nuclear pitch accent on an utterance-new noun is licensed by the subsequent repetition of that noun (e.g. Drag the SQUARE with the house to the TRIangle with the house). The phonemic restoration paradigm was modified to obscure word-initial segmental information uniquely identifying the final word in a spoken instruction, resulting in a stimulus compatible with two lexical alternatives (e.g. mouse/house). In Experiment 1, we measured participants' final interpretations and response times. In Experiment 2, we used the same materials in a crowd-sourced gating study. Sentence interpretations at gated intervals, final interpretations and response times provided converging evidence that the anticipatory deaccenting pattern contributed to listeners' referential expectations. The results illustrate the availability and importance of sentence-level accent patterns in spoken language comprehension. |
Christophe Carlei; Dirk Kerzel The effect of gaze direction on the different components of visuo-spatial short-term memory Journal Article In: Laterality, vol. 20, no. 6, pp. 738–754, 2015. @article{Carlei2015, Cerebral asymmetries and cortical regions associated with the upper and lower visual field were investigated using shifts of gaze. Earlier research suggests that gaze shifts to the left or right increase activation of specific areas of the contralateral hemisphere. We asked whether looking at one quadrant of the visual field facilitates the recall in various visuo-spatial tasks. The different components of visuo-spatial memory were investigated by probing memory for a stimulus matrix in each quadrant of the screen. First, memory for visual images or patterns was probed with a matrix of squares that was simultaneously presented and had to be reconstructed by mouse click. Better memory performance was found in the upper left quadrant compared to the three other quadrants indicating that both laterality and elevation are important. Second, positional memory was probed by subsequently presenting squares which prevented the formation of a visual image. Again, we found that gaze to the upper left facilitated performance. Third, memory for object-location binding was probed by asking observers to associate objects to particular locations. Higher performance was found with gaze directed to the lower quadrants irrespective of lateralization, confirming that only some components of visual short-term memory have shared neural substrates. |
Nathan Caruana; Jon Brock; Alexandra Woolgar A frontotemporoparietal network common to initiating and responding to joint attention bids Journal Article In: NeuroImage, vol. 108, pp. 34–46, 2015. @article{Caruana2015, Joint attention is a fundamental cognitive ability that supports daily interpersonal relationships and communication. The Parallel Distributed Processing model (PDPM) postulates that responding to (RJA) and initiating (IJA) joint attention are predominantly supported by posterior-parietal and frontal regions respectively. It also argues that these neural networks integrate during development, supporting the parallel processes of self- and other-attention representation during interactions. However, direct evidence for the PDPM is limited due to a lack of ecologically valid experimental paradigms that can capture both RJA and IJA. Building on existing interactive approaches, we developed a virtual reality paradigm where participants engaged in an online interaction to complete a cooperative task. By including tightly controlled baseline conditions to remove activity associated with non-social task demands, we were able to directly contrast the neural correlates of RJA and IJA to determine whether these processes are supported by common brain regions. Both RJA and IJA activated broad frontotemporoparietal networks. Critically, a conjunction analysis identified that a subset of these regions were common to both RJA and IJA. This right-lateralised network included the dorsal portion of the middle frontal gyrus (MFG), inferior frontal gyrus (IFG), middle temporal gyrus (MTG), precentral gyrus, posterior superior temporal sulcus (pSTS), temporoparietal junction (TPJ) and precuneus. Additional activation was observed in this network for IJA relative to RJA at MFG, IFG, TPJ and precuneus. 
This is the first imaging study to directly investigate the neural correlates common to RJA and IJA engagement, and thus support the assumption that a broad integrated network underlies the parallel aspects of both initiating and responding to joint attention. |
Nathan Caruana; Peter Lissa; Genevieve McArthur The neural time course of evaluating self-initiated joint attention bids Journal Article In: Brain and Cognition, vol. 98, pp. 43–52, 2015. @article{Caruana2015a, Background: During interactions with other people, we constantly evaluate the significance of our social partner's gaze shifts in order to coordinate our behaviour with their perspective. In this study, we used event-related potentials (ERPs) to investigate the neural time course of evaluating gaze shifts that signal the success of self-initiated joint attention bids. Method: Nineteen participants were allocated to a "social" condition, in which they played a cooperative game with an anthropomorphic virtual character whom they believed was controlled by a human partner in a nearby laboratory. Participants were required to initiate joint attention towards a target. In response, the virtual partner shifted his gaze congruently towards the target - thus achieving joint attention - or incongruently towards a different location. Another 19 participants completed the same task in a non-social "control" condition, in which arrows, believed to be controlled by a computer program, pointed at a location that was either congruent or incongruent with the participant's target fixation. Results: In the social condition, ERPs to the virtual partner's incongruent gaze shifts evoked significantly larger P350 and P500 peaks compared to congruent gaze shifts. This P350 and P500 morphology was absent in both the congruent and incongruent control conditions. Discussion: These findings are consistent with previous claims that gaze shifts differing in their social significance modulate central-parietal ERPs 350 ms following the onset of the gaze shift. Our control data highlights the social specificity of the observed P350 effect, ruling out explanations pertaining to attention modulation or error detection. |
Michele Cascardi; Davine Armstrong; Leeyup Chung; Denis Pare Pupil response to threat in trauma-exposed individuals with or without PTSD Journal Article In: Journal of Traumatic Stress, vol. 28, pp. 370–374, 2015. @article{Cascardi2015, An infrequently studied and potentially promising physiological marker for posttraumatic stress disorder (PTSD) is pupil response. This study tested the hypothesis that pupil responses to threat would be significantly larger in trauma-exposed individuals with PTSD compared to those without PTSD. Eye-tracking technology was used to evaluate pupil response to threatening and neutral images. Recruited for participation were 40 trauma-exposed individuals; 40.0% (n = 16) met diagnostic criteria for PTSD. Individuals with PTSD showed significantly more pupil dilation to threat-relevant stimuli compared to the neutral elements (Cohen's d = 0.76), and to trauma-exposed controls (Cohen's d = 0.75). Pupil dilation significantly accounted for 12% of variability in PTSD after time elapsed since most recent trauma, cumulative violence exposure, and trait anxiety were statistically adjusted. The final logistic regression model was associated with 85% of variability in PTSD status and correctly classified 93.8% of individuals with PTSD and 95.8% of those without. Pupil reactivity showed promise as a physiological marker for PTSD. |
Julia Boggia; Jelena Ristic Social event segmentation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 4, pp. 731–744, 2015. @article{Boggia2015, Humans are experts in understanding social environments. What perceptual and cognitive processes enable such competent evaluation of social information? Here we show that environmental content is grouped into units of "social perception", which are formed automatically based on the attentional priority given to social information conveyed by eyes and faces. When asked to segment a clip showing a typical daily scenario, participants were remarkably consistent in identifying the boundaries of social events. Moreover, at those social event boundaries, participants' eye movements were reliably directed to actors' eyes and faces. Participants' indices of attention measured during the initial passive viewing, reflecting natural social behaviour, also showed a remarkable correspondence with overt social segmentation behaviour, reflecting the underlying perceptual organization. Together, these data show that dynamic information is automatically organized into meaningful social events on an ongoing basis, strongly suggesting that the natural comprehension of social content in daily life might fundamentally depend on this underlying grouping process. |
Jens Bölte; Andrea Böhl; Christian Dobel; Pienie Zwitserlood In: Frontiers in Psychology, vol. 6, pp. 1540, 2015. @article{Boelte2015, In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl-garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitutes evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical, processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels. |
Yoram Bonneh; Yael Adini; Uri Polat Contrast sensitivity revealed by microsaccades Journal Article In: Journal of Vision, vol. 15, no. 9, pp. 1–12, 2015. @article{Bonneh2015, Microsaccades are small, rapid, and involuntary eye movements that occur during fixation in an apparently stochastic manner. They are known to be inhibited in response to sensory transients, with a time course that depends on the stimulus parameters and attention. However, the temporal precision of their onsets and the degree to which they can be used to assess the response of the visual system to basic stimulus parameters is currently unknown. Here we studied microsaccade response properties as a function of the contrast and spatial frequency of visual onsets. Observers (n = 18) viewed and silently counted 2-min sequences of Gabor patches presented briefly (100 ms) at 1 Hz. Contrast and spatial frequency were randomized in different experiments. We found that the microsaccade response time, as measured by the latency of the first microsaccade relative to stimulus onset following its release from inhibition, was sensitive to the contrast and spatial frequency of the stimulus and could be used to extract a contrast response function without the observers' response. We also found that contrast detection thresholds, measured behaviorally for different spatial frequencies, were highly and positively correlated (R = 0.87) with the microsaccade response time measured at high contrast (>4 times the threshold). These results show that different measures of microsaccade inhibition, especially the microsaccade response time, can provide accurate and involuntary measures of low-level visual properties such as contrast response and sensitivity. |
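The microsaccade response time (msRT) measure described in the Bonneh et al. abstract above lends itself to a brief illustration. The sketch below is an editorial example, not the authors' code: it assumes microsaccade onset times have already been detected by some upstream algorithm, and the 150 ms inhibition window is an illustrative placeholder, not a value from the paper.

```python
# Sketch: msRT as the latency of the first microsaccade after stimulus
# onset, once an assumed post-onset inhibition window has elapsed.
# Microsaccade onsets are taken as already detected (times in ms).

def microsaccade_response_time(ms_onsets, stim_onset, inhibition_ms=150.0):
    """Return the latency (ms) of the first microsaccade occurring at or
    after the assumed inhibition window following stim_onset, or None if
    no such microsaccade occurred."""
    latencies = [t - stim_onset for t in ms_onsets
                 if t - stim_onset >= inhibition_ms]
    return min(latencies) if latencies else None

# Example: microsaccades 80, 210, and 400 ms after a stimulus at t = 1000 ms.
# The 80 ms event falls inside the assumed inhibition window and is ignored.
rt = microsaccade_response_time([1080.0, 1210.0, 1400.0], 1000.0)
print(rt)
```

In the study, such per-trial latencies were aggregated across stimulus contrasts to trace a contrast response function without any manual response.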
Ali Borji; Andreas Lennartz; Marc Pomplun What do eyes reveal about the mind? Algorithmic inference of search targets from fixations Journal Article In: Neurocomputing, vol. 149, pp. 788–799, 2015. @article{Borji2015, We address the question of inferring the search target from fixation behavior in visual search. Such inference is possible since during search, our attention and gaze are guided toward visual features similar to those in the search target. We strive to answer two fundamental questions: what are the most powerful algorithmic principles for this task, and how does their performance depend on the amount of available eye movement data and the complexity of the target objects? In the first two experiments, we choose a random-dot search paradigm to eliminate contextual influences on search. We present an algorithm that correctly infers the target pattern up to 50 times as often as a previously employed method and promises sufficient power and robustness for interface control. Moreover, the current data suggest a principal limitation of target inference that is crucial for interface design: if the target pattern exceeds a certain spatial complexity level, only a subpattern tends to guide the observers' eye movements, which drastically impairs target inference. In the third experiment, we show that it is possible to predict search targets in natural scenes using pattern classifiers and classic computer vision features significantly above chance. The availability of compelling inferential algorithms could initiate a new generation of smart, gaze-controlled interfaces and wearable visual technologies that deduce from their users' eye movements the visual information for which they are looking. In a broader perspective, our study shows directions for efficient intent decoding from eye movements. |
Tobias Bormann; Sascha A. Wolfer; Wibke Hachmann; Claudia Neubauer; Lars Konieczny Fast word reading in pure alexia: “Fast, yet serial” Journal Article In: Neurocase, vol. 21, no. 2, pp. 251–267, 2015. @article{Bormann2015, Pure alexia is a severe impairment of word reading in which individuals process letters serially with a pronounced length effect. Yet, there is considerable variation in the performance of alexic readers with generally very slow, but also occasionally fast responses, an observation addressed rarely in previous reports. It has been suggested that "fast" responses in pure alexia reflect residual parallel letter processing or that they may even be subserved by an independent reading system. Four experiments assessed fast and slow reading in a participant (DN) with pure alexia. Two behavioral experiments investigated frequency, neighborhood, and length effects in forced fast reading. Two further experiments measured eye movements when DN was forced to read quickly, or could respond faster because words were easier to process. Taken together, there was little support for the proposal that "qualitatively different" mechanisms or reading strategies underlie both types of responses in DN. Instead, fast responses are argued to be generated by the same serial-reading strategy. |
Sabine Born; Eckart Zimmermann; Patrick Cavanagh The spatial profile of mask-induced compression for perception and action Journal Article In: Vision Research, vol. 110, pp. 128–141, 2015. @article{Born2015, Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as saccadic compression of space. We have recently demonstrated that similar mislocalizations of flashed stimuli can be observed in the absence of saccades: brief probes were attracted towards a visual reference when followed by a mask. To examine the spatial profile of this new phenomenon of mask-induced compression, here we used a pair of references that draw the probe into the gap between them. Strong compression was found when we masked the probe and presented it following a reference pair, whereas little or no compression occurred for the probe without the reference pair or without the mask. When the two references were arranged vertically, horizontal mislocalizations prevailed. That is, probes presented to the left or right of the vertically arranged references were "drawn in" to be seen aligned with the references. In contrast, when we arranged the two references horizontally, we found vertical compression for stimuli presented above or below the references. Finally, when participants were to indicate the perceived probe location by making an eye movement towards it, saccade landing positions were compressed in a similar fashion as perceptual judgments, confirming the robustness of mask-induced compression. Our findings challenge pure oculomotor accounts of saccadic compression of space that assume a vital role for saccade-specific signals such as corollary discharge or the updating of eye position. Instead, we suggest that saccade- and mask-induced compression both reflect how the visual system deals with disruptions. |
Annalisa Bosco; Markus Lappe; Patrizia Fattori Adaptation of saccades and perceived size after trans-saccadic changes of object size Journal Article In: Journal of Neuroscience, vol. 35, no. 43, pp. 14448–14456, 2015. @article{Bosco2015, When saccadic eye movements consistently fail to land on the intended target, saccade accuracy is maintained by gradually adapting the amplitude of successive saccades to the same target. Such saccadic adaptation is usually induced by systematically displacing a small visual target during the execution of the saccade. However, saccades are normally performed to extended objects. Here we report changes in saccade amplitude when the size of a target object is systematically changed during a saccade. Moreover, we find that this manipulation also affected the visual perception of the size of that object. Human subjects were tested in shortening and lengthening adaptation where they had to make saccades to targets of different sizes, which were each shortened or lengthened during saccade execution, respectively. In both experiments, a preadaptation and postadaptation phase required manually indicating the horizontal size of each target by grip aperture and, in a further experiment, a verbal size report. We evaluated the effect of change in visual perception on saccade and on the two modalities of judgment. We observed that (1) saccadic adaptation can be induced by modifying target object size and (2) this gradual change in saccade amplitude in the direction of the object size change evokes a concomitant change in perceived object size. These findings suggest that size is a relevant signal for the saccadic system and its trans-saccadic manipulation entails considerable changes at multiple levels of sensorimotor performance. |
Oliver Bott; Anja Gattnar The cross-linguistic processing of aspect – an eyetracking study on the time course of aspectual interpretation in Russian and German Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 7, pp. 877–898, 2015. @article{Bott2015a, This paper reports a cross-linguistic study on the time course of aspectual interpretation in an aspect language (Russian) and a non-aspect language (German). In Russian, mereological semantics led us to expect incremental mismatch detection independently of the presence or absence of the verbal arguments. In German, however, mismatch effects should be delayed until the processor has encountered the complete predication. These predictions were tested in two eyetracking during reading experiments. We investigated the processing of achievement verbs modified by aspectually mismatching adverbials in Russian (Exp. 1) and German (Exp. 2) and manipulated the word order in such a way that the mismatch occurred before or after the predication was complete. The data show that Russian readers immediately noticed the mismatch independently of whether the verb preceded or followed its arguments, whereas German readers showed mismatch effects only after a complete predication. We take this as evidence for cross-linguistically different increment sizes in event interpretation. |
Oliver Bott; Fabian Schlotterbeck The processing domain of scope interaction Journal Article In: Journal of Semantics, vol. 32, no. 1, pp. 39–92, 2015. @article{Bott2015, The present study investigates whether quantifier scope is computed incrementally during online sentence processing. We exploited the free word order in German to manipulate whether the verbal predicate preceded or followed the second quantifier in doubly quantified sentences that required the computation of inverse scope. A possessive pronoun in the first quantifier that had to be bound by the second quantifier was used to enforce scope inversion. We tested whether scope inversion causes difficulty and whether this difficulty emerges even at a point before comprehenders have encountered the main verb. We report three pretests and two reading time experiments. The first two pretests were offline tests that established (1) that the sentences exhibited the assumed scope preferences and (2) that variable binding forced scope inversion. The third pretest employed self-paced reading to show that interpreting a bound variable is not difficult per se and that difficulty in the critical construction must thus be due to inverting scope. Incremental processing of quantifier scope was investigated in a self-paced reading experiment. We observed difficulty right after the second quantifier, but only if it appeared after the main verb, that is, after the predication was complete. Further evidence for late scope inversion comes from an eye-tracking experiment. Again, a scope inversion effect could only be observed at the end of the sentence. Taken together, our study demonstrates that in German inverse scope is only computed at the sentence boundary. |
Meredith Brown; Anne Pier Salverda; Christine Gunlogson; Michael K. Tanenhaus Interpreting prosodic cues in discourse context Journal Article In: Language, Cognition and Neuroscience, vol. 30, no. 1-2, pp. 149–166, 2015. @article{Brown2015b, Two visual-world experiments investigated whether and how quickly discourse-based expectations about the prosodic realization of spoken words modulate interpretation of acoustic-prosodic cues. Experiment 1 replicated effects of segmental lengthening on activation of onset-embedded words (e.g. pumpkin) using resynthetic manipulation of duration and fundamental frequency (F0). In Experiment 2, the same materials were preceded by instructions establishing information-structural differences between competing lexical alternatives (i.e. repeated vs. newly-assigned thematic roles) in critical instructions. Eye-movements generated upon hearing the critical target word revealed a significant interaction between information structure and target-word realization: Segmental lengthening and pitch excursion elicited more fixations to the onset-embedded competitor when the target word remained in the same thematic role, but not when its thematic role changed. These results suggest that information structure modulates the interpretation of acoustic-prosodic cues by influencing expectations about fine-grained acoustic-phonetic properties of the unfolding utterance. |
Michael Browning; Timothy E. Behrens; Gerhard Jocham; Jill X. O'Reilly; Sonia J. Bishop Anxious individuals have difficulty learning the causal statistics of aversive environments Journal Article In: Nature Neuroscience, vol. 18, no. 4, pp. 590–596, 2015. @article{Browning2015, Statistical regularities in the causal structure of the environment enable us to predict the probable outcomes of our actions. Environments differ in the extent to which action-outcome contingencies are stable or volatile. Difficulty in being able to use this information to optimally update outcome predictions might contribute to the decision-making difficulties seen in anxiety. We tested this using an aversive learning task manipulating environmental volatility. Human participants low in trait anxiety matched updating of their outcome predictions to the volatility of the current environment, as predicted by a Bayesian model. Individuals with high trait anxiety showed less ability to adjust updating of outcome expectancies between stable and volatile environments. This was linked to reduced sensitivity of the pupil dilatory response to volatility, potentially indicative of altered norepinephrinergic responsivity to changes in this aspect of environmental information. |
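The core prediction in the Browning et al. abstract above, that outcome predictions should be updated faster in volatile than in stable environments, can be illustrated with a minimal delta-rule learner. This sketch is an editorial illustration, not the authors' Bayesian model; the learning rates, starting probability, and reversal scenario are all assumptions chosen for the example.

```python
# Minimal illustration of volatility-matched updating: a delta-rule
# learner whose learning rate is chosen per environment. In a volatile
# environment, a higher learning rate tracks contingency reversals faster.

def update_prediction(p, outcome, learning_rate):
    """One delta-rule update of the predicted outcome probability p."""
    return p + learning_rate * (outcome - p)

def learn(outcomes, learning_rate, p0=0.5):
    """Run successive delta-rule updates over a sequence of 0/1 outcomes."""
    p = p0
    for o in outcomes:
        p = update_prediction(p, o, learning_rate)
    return p

# After a sudden reversal (outcome probability drops from high to zero),
# the high-learning-rate learner revises its prediction much faster.
post_reversal = [0, 0, 0, 0]
fast = learn(post_reversal, learning_rate=0.5, p0=0.9)  # apt for volatility
slow = learn(post_reversal, learning_rate=0.1, p0=0.9)  # apt for stability
print(fast, slow)
```

The study's finding maps onto this toy picture as high trait-anxious participants adjusting their effective learning rate less between stable and volatile blocks than low trait-anxious participants.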
Sarah Brown-Schmidt; Scott H. Fraundorf Interpretation of informational questions modulated by joint knowledge and intonational contours Journal Article In: Journal of Memory and Language, vol. 84, pp. 49–74, 2015. @article{BrownSchmidt2015, We examine processes by which dialogue partners form and use representations of joint knowledge, or common ground, during on-line language processing. Eye-tracked participants interpreted wh-questions that inquired about task-relevant objects during interactive conversation. Some objects were known to both speaker and listener, and thus in common ground, whereas others were only known to the listener, and thus in privileged ground. Questions were produced with a typical, falling intonation (Experiment 1) or with either falling or rising intonation (Experiments 2-3). Unlike the falling contour, the rising contour can indicate a request for clarification about previously mentioned information. Participants interpreted falling-contour questions as asking about privileged-ground objects. By contrast, rising questions elicited more consideration of common-ground objects. Directly comparing questions that were produced during live conversation vs. questions that were pre-recorded revealed that this sensitivity to common vs. privileged ground emerged only during live conversation. Finally, individual difference analyses in all three experiments did not support the claim that individuals fail to take perspective when executive function is limited. Taken together, these findings provide evidence for the on-line integration of perspective and intonation during conversational language processing. The lack of perspective effects in non-interactive settings speaks to the inherently interactive nature of conversational processes. |
Sarah Brown-Schmidt; Agnieszka E. Konopka Processes of incremental message planning during conversation Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 3, pp. 833–843, 2015. @article{BrownSchmidt2015a, Speaking begins with the formulation of an intended preverbal message and linguistic encoding of this information. The transition from thought to speech occurs incrementally, with cascading planning at subsequent levels of production. In this article, we aim to specify the mechanisms that support incremental message preparation. We contrast two hypotheses about the mechanisms responsible for incorporating message-level information into a linguistic plan. According to the Initial Preparation view, messages can be encoded as fluent utterances if all information is ready before speaking begins. By contrast, on the Continuous Incrementality view, messages can be continually prepared and updated throughout the production process, allowing for fluent production even if new information is added to the message while speaking is underway. Testing these hypotheses, eye-tracked speakers in two experiments produced unscripted, conjoined noun phrases with modifiers. Both experiments showed that new message elements can be incrementally incorporated into the utterance even after articulation begins, consistent with a Continuous Incrementality view of message planning, in which messages percolate to linguistic encoding immediately as that information becomes available in the mind of the speaker. We conclude by discussing the functional role of incremental message planning in conversational speech and the situations in which this continuous incremental planning would be most likely to be observed. |
Berno Bucker; Artem V. Belopolsky; Jan Theeuwes Distractors that signal reward attract the eyes Journal Article In: Visual Cognition, vol. 23, no. 1-2, pp. 1–24, 2015. @article{Bucker2015, Salient stimuli and stimuli associated with reward have the ability to attract both attention and the eyes. The current study exploited the effects of reward on the well-known global effect in which two objects appear simultaneously in close spatial proximity. Participants always made saccades to a predefined target, while the colour of a nearby distractor signalled the reward available (high/low) for that trial. Unlike previous reward studies, in the current study these distractors never served as targets. We show that participants made fast saccades towards the target. However, saccades landed significantly closer to the high compared to the low reward signalling distractor. This reward effect was already present in the first block and remained stable throughout the experiment. Instead of landing exactly in between the two stimuli (i.e., the classic global effect), the fastest eye movements landed closer towards the reward signalling distractor. Results of a control experiment, in which no distractor-reward contingencies were present, confirmed that the observed effects were driven by reward and not by physical salience. Furthermore, there were trial-by-trial reward priming effects in which saccades landed significantly closer to the high instead of the low reward signalling distractor when the same distractor was presented on two consecutive trials. Together the results imply that a reward signalling stimulus that was never part of the task set has an automatic effect on the oculomotor system. |
Berno Bucker; Jeroen D. Silvis; Mieke Donk; Jan Theeuwes Reward modulates oculomotor competition between differently valued stimuli Journal Article In: Vision Research, vol. 108, pp. 103–112, 2015. @article{Bucker2015a, The present work explored the effects of reward in the well-known global effect paradigm in which two objects appear simultaneously in close spatial proximity. The experiment consisted of three phases: (i) a pre-training phase that served as a baseline, (ii) a reward-training phase to associate differently colored stimuli with high, low and no reward value, and (iii) a post-training phase in which rewards were no longer delivered, to examine whether objects previously associated with higher reward value attracted the eyes more strongly than those associated with low or no reward value. Unlike previous reward studies, the differently valued objects directly competed with each other on the same trial. The results showed that initially eye movements were not biased towards any particular stimulus, while in the reward-training phase, eye movements started to land progressively closer towards stimuli that were associated with a high reward value. Even though rewards were no longer delivered, this bias remained robustly present in the post-training phase. A time course analysis showed that the effect of reward was present for the fastest saccades (around 170 ms) and increased with increasing latency. Although strategic effects for slower saccades cannot be ruled out, we suggest that fast oculomotor responses became habituated and were no longer under strategic attentional control. Together the results imply that reward affects oculomotor competition in favor of stimuli previously associated with high reward, when multiple reward-associated objects compete for selection. |
Carsten Buhmann; Wolfgang H. Zangemeister; Stefanie Kraft; Kim Hinkelmann; Sven Krause; Christian Gerloff Visual attention and saccadic oculomotor control in Parkinson's disease Journal Article In: European Neurology, vol. 73, no. 5-6, pp. 283–293, 2015. @article{Buhmann2015, In patients with Parkinson's disease (PD) we aimed at differentiating the relation between selective visual attention, deficits of programming and dynamics of saccadic eye movements while searching for a target, and hand-reaction time as well as hand-movement time. Visual attention is crucial for concentrating selectively on one aspect of the visual field while ignoring other aspects. Eye movements are anatomically and functionally related to mechanisms of visual attention. Saccadic dysfunction might confound selective visual attention in PD. Methods: We studied visual selective attention in 22 medicated PD patients (clinical ON status, mild to moderate disease severity) and 22 age-matched controls. We looked for possible interferences through oculomotor deficits. Two tasks were compared: free viewing of photographs and time-optimal visual search of a hidden target. Visual search times (VST), task-related dynamics of saccades, and hand-reaction and hand-movement times were analyzed. Results: In the free viewing task mild to moderately affected PD patients did not differ statistically from healthy subjects with respect to saccade dynamics. However, patients differed significantly from healthy subjects in the time-optimal visual search task with 25% lower rates of successful searches. Hand-movement reaction time did not differ between groups, whereas hand-movement execution time was significantly prolonged in PD patients. Conclusion: Saccadic oculomotor control and hand-movement reaction times were intact, whereas in our less severely affected treated PD patients, visual selective attention was not. The highly reduced successful search rate might be related to disturbed programming and delayed execution of saccades during time-optimal visual search due to decreased execution of serial-order sequential generation of saccades. |
Melissa C. Bulloch; Steven L. Prime; Jonathan J. Marotta Anticipatory gaze strategies when grasping moving objects Journal Article In: Experimental Brain Research, vol. 233, no. 12, pp. 3413–3423, 2015. @article{Bulloch2015, Grasping moving objects involves both spatial and temporal predictions. The hand is aimed at a location where it will meet the object, rather than the position at which the object is seen when the reach is initiated. Previous eye–hand coordination research from our laboratory, utilizing stationary objects, has shown that participants' initial gaze tends to be directed towards the eventual location of the index finger when making a precision grasp. This experiment examined how the speed and direction of a computer-generated block's movement affect gaze and selection of grasp points. Results showed that when the target first appeared, participants anticipated the target's eventual movement by fixating well ahead of its leading edge in the direction of eventual motion. Once target movement began, participants shifted their fixation to the leading edge of the target. Upon reach initiation, participants then fixated towards the top edge of the target. As seen in our previous work with stationary objects, final fixations tended towards the final index finger contact point on the target. Moreover, gaze and kinematic analyses revealed that it was direction that most influenced fixation locations and grasp points. Interestingly, participants fixated further ahead of the target's leading edge when the direction of motion was leftward, particularly at the slower speed—possibly the result of mechanical constraints of intercepting leftward-moving targets with one's right hand. |
Antimo Buonocore; David Melcher In: Experimental Brain Research, vol. 233, no. 6, pp. 1893–1905, 2015. @article{Buonocore2015, When we explore the visual environment around us, we produce sequences of very precise eye movements aligning the objects of interest with the most sensitive part of the retina for detailed visual processing. A copy of the impending motor command, the corollary discharge, is sent as soon as the first saccade in a sequence is ready to monitor the next fixation location and correctly plan the subsequent eye movement. Neurophysiological investigations have shown that chemical interference with the corollary discharge generates a distinct pattern of spatial errors on sequential eye movements, with similar results also from clinical and TMS studies. Here, we used saccadic inhibition to interfere with the temporal domain of the first of two subsequent saccades during a standard double-step paradigm. In two experiments, we report that the temporal interference on the primary saccade led to a specific error in the final landing position of the second saccade that was consistent with previous lesion and neurophysiological studies, but without affecting the spatial characteristics of the first eye movement. On the other hand, single-step saccades were differently influenced by the flash, with a general undershoot, more pronounced for larger saccadic amplitude. These findings show that a flashed visual transient can disrupt saccadic updating in a double-step task, possibly due to the mismatch between the planned and the executed saccadic eye movement. |
Michele Burigo; Pia Knoeferle Visual attention during spatial language comprehension Journal Article In: PLoS ONE, vol. 10, no. 1, pp. e0115758, 2015. @article{Burigo2015, Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener's visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to which extent visual attention is guided by words in the utterance and to which extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders' visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial's verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. |
Melanie R. Burke; Charlotte Poyser; Ingo Schiessl Age-related deficits in visuospatial memory are due to changes in preparatory set and eye-hand coordination Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 70, no. 5, pp. 682–690, 2015. @article{Burke2015, Objectives. Healthy aging is associated with a decline in visuospatial working memory. The nature of the changes leading to this decline in the responses of the eye and/or hand is still under debate. This study aims to establish whether impairments observed in performance on cognitive tasks are due to actual cognitive effects or are caused by motor-related eye–hand coordination. Methods. We implemented a computerized version of the Corsi span task. The eye and touch responses of healthy young and older adults were recorded to a series of remembered targets on a screen. Results. Results revealed differences in fixation strategies between the young and the old with increasing cognitive demand, which resulted in higher error rates in the older group. We observed increasing reaction times and durations between fixations and touches to targets, with increasing memory load and delays in both the eye and the hand in the older adults. Discussion. Our results show that older adults have difficulty maintaining a "preparatory set" for durations longer than 5 s and with increases in memory load. Attentional differences cannot account for our results, and differences in age groups appear to be principally memory related. Older adults reveal poorer eye–hand coordination, which is further confounded by increasing delay and complexity. |
Robyn Burton; Luke J. Saunders; David P. Crabb Areas of the visual field important during reading in patients with glaucoma Journal Article In: Japanese Journal of Ophthalmology, vol. 59, no. 2, pp. 94–102, 2015. @article{Burton2015, PURPOSE To determine the areas of the binocular visual field (VF) associated with reading speed in glaucomatous patients with preserved visual acuity (VA). MATERIALS AND METHODS Fifty-four patients with glaucoma (mean age ± standard deviation 70 ± 8 years) and 38 visually healthy controls (mean age 66 ± 9 years) had silent reading speeds measured using non-scrolling text on a computer setup. Participants completed three cognitive tests and tests of visual function, including the Humphrey 24-2 threshold VF test in each eye; the results were combined to produce binocular integrated VFs (IVFs). Regression analyses using the control group to correct for cognitive test scores, age and VA were conducted to obtain the IVF mean deviation (MD) and total deviation (TD) value from each IVF test location. Concordance between reading speed and TD, assessed using R (2) statistics, was ranked in order of importance to explore the parts of the IVF most likely to be linked with reading speed. RESULTS No significant association between IVF MD value and reading speed was observed (p = 0.38). Ranking individual thresholds indicated that the inferior left section of the IVF was most likely to be associated with reading speed. CONCLUSIONS Impairment in certain regions of the binocular VF may be associated with reading performance even in patients with preserved VA. The inferior left region of patient IVFs may be important for changing lines during reading. |
Daniel R. Buttaccio; Nicholas D. Lange; Rick P. Thomas; Michael R. Dougherty Using a model of hypothesis generation to predict eye movements in a visual search task Journal Article In: Memory & Cognition, vol. 43, no. 2, pp. 247–265, 2015. @article{Buttaccio2015, We used a model of hypothesis generation (called HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008) to make predictions regarding the deployment of attention (as assessed via eye movements) afforded by the cued recall of target characteristics before the onset of a search array. On each trial, while being eyetracked, participants were first presented with a memory prompt that was diagnostic regarding the target's color in a subsequently presented search array. We assume that the memory prompts led to the generation of hypotheses (i.e., potential target characteristics) from long-term memory into working memory to guide attentional processes and ocular–motor behavior. However, given that multiple hypotheses might be generated in response to a prompt, it has been unclear how the focal hypothesis (i.e., the hypothesis that exerts the most influence on search) affects search behavior. We tested two possibilities using first fixation data, with the assumption that the first item fixated within a search array was the focal hypothesis. We found that a model assuming that the first item generated into working memory guides overt attentional processes was most consistent with the data at both the aggregate and single-participant levels of analysis. |
Korhan Buyukturkoglu; Hans Roettgers; Jens Sommer; Mohit Rana; Leonie Dietzsch; Ezgi Belkis Arikan; Ralf Veit; Rahim Malekshahi; Tilo Kircher; Niels Birbaumer; Ranganatha Sitaram; Sergio Ruiz Self-regulation of anterior insula using real-time fMRI and its behavioral effects in obsessive compulsive disorder: A feasibility study Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0135872, 2015. @article{Buyukturkoglu2015, Introduction: Obsessive-compulsive disorder (OCD) is a common and chronic condition that can have disabling effects throughout the patient's lifespan. Frequent symptoms among OCD patients include fear of contamination and washing compulsions. Several studies have shown a link between contamination fears, disgust over-reactivity, and insula activation in OCD. In concordance with the role of insula in disgust processing, new neural models based on neuroimaging studies suggest that abnormally high activations of insula could be implicated in OCD psychopathology, at least in the subgroup of patients with contamination fears and washing compulsions. Methods: In the current study, we used a Brain Computer Interface (BCI) based on real-time functional magnetic resonance imaging (rtfMRI) to aid OCD patients to achieve down-regulation of the Blood Oxygenation Level Dependent (BOLD) signal in anterior insula. Our first aim was to investigate whether patients with contamination obsessions and washing compulsions can learn to volitionally decrease (down-regulate) activity in the insula in the presence of disgust/anxiety provoking stimuli. Our second aim was to evaluate the effect of down-regulation on clinical, behavioural and physiological changes pertaining to OCD symptoms.
Hence, several pre- and post-training measures were performed, i.e., confronting the patient with a disgust/anxiety inducing real-world object (Ecological Disgust Test), and subjective rating and physiological responses (heart rate, skin conductance level) of disgust towards provoking pictures. Results: Results of this pilot study, performed in 3 patients (2 females), show that OCD patients can gain self-control of the BOLD activity of insula, albeit to different degrees. In two patients, positive changes in behaviour in the EDT were observed following the rtfMRI training. Behavioural changes were also confirmed by reductions in the negative valence and in the subjective perception of disgust towards symptom provoking images. Conclusion: Although preliminary, results of this study confirmed that insula down-regulation is possible in patients suffering from OCD, and that volitional decreases of insula activation could be used for symptom alleviation in this disorder. |
Zoya Bylinskii; Phillip Isola; Constance Bainbridge; Antonio Torralba; Aude Oliva Intrinsic and extrinsic effects on image memorability Journal Article In: Vision Research, vol. 116, pp. 165–178, 2015. @article{Bylinskii2015, Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. Here we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Second, we test two extrinsic factors: image context and observer behavior. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can automatically predict how changes in context change the memorability of natural images. In addition to context, we study a second extrinsic factor: where an observer looks while memorizing an image. It turns out that eye movements provide additional information that can predict whether or not an image will be remembered, on a trial-by-trial basis. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete and fine-grained model of image memorability than previously available. |
Laura Cacciamani; Paige E. Scalf; Mary A. Peterson Neural evidence for competition-mediated suppression in the perception of a single object Journal Article In: Cortex, vol. 72, pp. 124–139, 2015. @article{Cacciamani2015, Multiple objects compete for representation in visual cortex. Competition may also underlie the perception of a single object. Computational models implement object perception as competition between units on opposite sides of a border. The border is assigned to the winning side, which is perceived as an object (or "figure"), whereas the other side is perceived as a shapeless ground. Behavioral experiments suggest that the ground is inhibited to a degree that depends on the extent to which it competed for object status, and that this inhibition is relayed to low-level brain areas. Here, we used fMRI to assess activation for ground regions of task-irrelevant novel silhouettes presented in the left or right visual field (LVF or RVF) while participants performed a difficult task at fixation. Silhouettes were designed so that the insides would win the competition for object status. The outsides (grounds) suggested portions of familiar objects in half of the silhouettes and novel objects in the other half. Because matches to object memories affect the competition, these two types of silhouettes operationalized, respectively, high competition and low competition from the grounds. The results showed that activation corresponding to ground regions was reduced for high- versus low-competition silhouettes in V4, where receptive fields (RFs) are large enough to encompass the familiar objects in the grounds, and in V1/V2, where RFs are much smaller. These results support a theory of object perception involving competition-mediated ground suppression and feedback from higher to lower levels. This pattern of results was observed in the left hemisphere (RVF), but not in the right hemisphere (LVF). 
One explanation of the lateralized findings is that task-irrelevant silhouettes in the RVF captured attention, allowing us to observe these effects, whereas those in the LVF did not. Experiment 2 provided preliminary behavioral evidence consistent with this possibility. |
Lijing Chen; Yufang Yang Emphasizing the only character: EMPHASIS, attention and contrast Journal Article In: Cognition, vol. 136, pp. 222–227, 2015. @article{Chen2015b, In conversations, pragmatic information such as emphasis is important for identifying the speaker's/writer's intention. The present research examines the cognitive processes involved in emphasis processing. Participants read short discourses that introduced one or two character(s), with the character being emphasized or non-emphasized in subsequent texts. Eye movements showed that: (1) early processing of the emphasized word was facilitated, which may have been due to increased attention allocation, whereas (2) late integration of the emphasized character was inhibited when the discourse involved only this character. These results indicate that it is necessary to include other characters as contrastive characters to facilitate the integration of an emphasized character, and support the existence of a relationship between Emphasis and Contrast computation. Taken together, our findings indicate that both attention allocation and contrast computation are involved in emphasis processing, and support the incremental nature of sentence processing and the importance of contrast in discourse comprehension. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Attentional bias modification facilitates attentional control mechanisms: Evidence from eye tracking Journal Article In: Biological Psychology, vol. 104, pp. 139–146, 2015. @article{Chen2015d, Social anxiety is thought to be maintained by biased attentional processing towards threatening information. Research has further shown that the experimental attenuation of this bias, through the implementation of attentional bias modification (ABM), may serve to reduce social anxiety vulnerability. However, the mechanisms underlying ABM remain unclear. The present study examined whether inhibitory attentional control was associated with ABM. A non-clinical sample of participants was randomly assigned to receive either ABM or a placebo task. To assess pre-post changes in attentional control, participants were additionally administered an emotional antisaccade task. ABM participants exhibited a subsequent shift in attentional bias away from threat as expected. ABM participants further showed a subsequent decrease in antisaccade cost, indicating a general facilitation of inhibitory attentional control. Mediational analysis revealed that the shift in attentional bias following ABM was independent of the change in attentional control. The findings suggest that the mechanisms of ABM are multifaceted. |
Po-Heng Chen; Jie-Li Tsai In: Language and Linguistics, vol. 16, no. 4, pp. 555–586, 2015. @article{Chen2015, The purpose of the present study is twofold: (1) to examine whether the syntactic category constraint can determine the semantic resolution of Chinese syntactic category ambiguous words; and (2) to investigate whether the syntactic category of alternative meanings of Chinese homographs can influence the subordinate bias effect (SBE) during lexical ambiguity resolution. In the present study, four types of Chinese biased homographs (NN, VV, VN, and NV) were embedded into syntactically and semantically subordinate-biased sentences. Each homograph was assigned a frequency-matched unambiguous word as control, which could fit into the same sentence frame. Participants' eye movements were recorded as they read each sentence. In general, the results showed that in a subordinate-biased context, (1) the SBE for the four types of homograph was significant only in the second-pass reading on the post-target words and (2) numerically, the NV homographs revealed a larger effect size of SBE than VN homographs on both target and post-target words. Our findings support the constraint-satisfaction models, suggesting that the syntactic category constraint is not the only factor influencing the semantic resolution of syntactic category ambiguous words, which is opposed to the prediction of the syntax-first models. |
Qi Chen; Daniel Mirman Interaction between phonological and semantic representations: Time matters Journal Article In: Cognitive Science, vol. 39, no. 3, pp. 538–558, 2015. @article{Chen2015a, Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences activation of semantically related concepts during spoken word recognition (Apfelbaum, Blumstein, & McMurray, 2011). The model made a novel prediction: Semantic input modulates the effect of phonological neighbors on target word processing, producing an approximately inverted-U-shaped pattern with a high phonological density advantage at an intermediate level of semantic input-in contrast to the typical disadvantage for high phonological density words in spoken word recognition. This prediction was confirmed with a new analysis of the Apfelbaum et al. data and in a visual world paradigm experiment with preview duration serving as a manipulation of strength of semantic input. These results are consistent with our previous claim that strongly active neighbors produce net inhibitory effects and weakly active neighbors produce net facilitative effects. |
Sheng-Chang Chen; Mi-Shan Hsiao; Hsiao-Ching She In: Computers in Human Behavior, vol. 53, pp. 169–180, 2015. @article{Chen2015e, This study examined the effectiveness of the different spatial abilities of high school students who constructed their understanding of the atomic orbital concepts and mental models after learning with multimedia learning materials presented in static and dynamic modes of 3D representation. A total of 60 high school students participated in this study and were randomly assigned into static and dynamic 3D representation groups. The dependent variables included a pre-test and post-test on atomic orbital concepts, an atomic orbital mental model construction test, and students' eye-movement behaviors. Results showed that students who learned with dynamic 3D representation allocated a significantly greater amount of attention, exhibited better performance on the mental model test, and constructed more sophisticated 3D hybridizations of the orbital mental model than the students in the static 3D group. The logistic regression result indicated that the dynamic 3D representation group students' number of saccades and number of re-readings were positive predictors, while the number of fixations was the negative predictor, for developing the students' 3D mental models of an atomic orbital. High-spatial-ability students outperformed the low-spatial-ability students on the atomic orbital conceptual test and mental model construction, while both types of students allocated similar amounts of attention to the 3D representations. Our results demonstrated that low-spatial-ability students' eye movement behaviors positively correlate with their performance on the atomic orbital concept test and the mental model construction. |
Xinxin Chen; Hongyan Yu; Fang Yu What is the optimal number of response alternatives for rating scales? From an information processing perspective Journal Article In: Journal of Marketing Analytics, vol. 3, no. 2, pp. 69–78, 2015. @article{Chen2015f, Rating scales are measuring instruments that are widely used in social science research. However, many different rating scale formats are used in the literature, differing specifically in the number of response alternatives offered. Previous studies on the optimal number of response alternatives have focused exclusively on the participants' final response results, rather than on the participants' information processing. We used an eye-tracking study to explore this issue from an information processing perspective. We analyzed the information processing in six scales with different response alternatives. We compared the reaction times, net acquiescence response styles, extreme response styles and proportional changes in the response alternatives of the six scales. Our results suggest that the optimal number of response alternatives is five. |
Joseph D. Chisholm; Alan Kingstone Action video games and improved attentional control: Disentangling selection- and response-based processes Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 5, pp. 1430–1436, 2015. @article{Chisholm2015, Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus–response processes that impact human performance. |
Joseph D. Chisholm; Alan Kingstone Action video game players' visual search advantage extends to biologically relevant stimuli Journal Article In: Acta Psychologica, vol. 159, pp. 93–99, 2015. @article{Chisholm2015a, Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. |
Wonil Choi; John M. Henderson Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing Journal Article In: Neuropsychologia, vol. 75, pp. 109–118, 2015. @article{Choi2015, Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. |
Wonil Choi; Matthew W. Lowder; Fernanda Ferreira; John M. Henderson Individual differences in the perceptual span during reading: Evidence from the moving window technique Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 7, pp. 2463–2475, 2015. @article{Choi2015a, We report the results of an eye tracking experiment that used the gaze-contingent moving window technique to examine individual differences in the size of readers' perceptual span. Participants read paragraphs while the size of the rightward window of visible text was systematically manipulated across trials. In addition, participants completed a large battery of individual-difference measures representing two cognitive constructs: language ability and oculomotor processing speed. Results showed that higher scores on language ability measures and faster oculomotor processing speed were associated with faster reading times and shorter fixation durations. More interestingly, the size of readers' perceptual span was modulated by individual differences in language ability but not by individual differences in oculomotor processing speed, suggesting that readers with greater language proficiency are more likely to have efficient mechanisms to extract linguistic information beyond the fixated word. |
John Christie; Matthew D. Hilchey; Ramesh Mishra; Raymond M. Klein Eye movements are primed toward the center of multiple stimuli even when the interstimulus distances are too large to generate saccade averaging Journal Article In: Experimental Brain Research, vol. 233, no. 5, pp. 1541–1549, 2015. @article{Christie2015, Prior oculomotor research has established that saccades tend to land near the center of multiple saccade targets when they are near each other. This saccade averaging phenomenon (or global effect) has been ascribed to short-distance lateral excitation between neurons in the superior colliculus. Further, at greater inter-stimulus distances, eye movements tend toward the individual elements. This transition to control by local elements (individuation) with inter-stimulus distance has been attributed to long-range lateral inhibition between neurons in winner-take-all models of oculomotor behavior. We hypothesized that the traditional method of requiring a saccade to an array of multiple, simultaneous targets may entail response ambiguity that intensifies with distance. We resolved the ambiguity by focussing on reaction time of our human participants to a single saccade target after one or more simultaneous priming stimuli. At a 50-ms prime-target interval, saccadic reaction time was shortest for targets closer to the center of the prime stimuli independent of the distance between the primes. This effect was gone at 400 ms. These findings challenge the typical inferences about the neural control of oculomotor behavior that have been derived from the boundary between saccade averaging and individuation and provide a new method to explore eye movements with lessened impact from decision processes. |
Antonios I. Christou; Yvonne Wallis; Hayley Bair; Hayley Crawford; Steven Frisson; Maurice P. Zeegers; Joseph P. McCleery BDNF Val66Met and 5-HTTLPR Genotype are each associated with visual scanning patterns of faces in young children Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 175, 2015. @article{Christou2015, Previous studies have documented both neuroplasticity-related BDNF Val66Met and emotion regulation-related 5-HTTLPR polymorphisms as genetic variants that contribute to the processing of emotions from faces. More specifically, research has shown the BDNF Met allele and the 5-HTTLPR Short allele to be associated with mechanisms of negative affectivity that relate to susceptibility for psychopathology. We examined visual scanning pathways in response to angry, happy, and neutral faces in relation to BDNF Val66Met and 5-HTTLPR genotyping in 49 children aged 4-7 years. Analyses revealed that variations in the visual processing of facial expressions of anger interacted with BDNF Val66Met genotype, such that children who carried at least one low neuroplasticity Met allele exhibited a vigilance avoidance pattern of visual scanning compared to homozygotes for the high neuroplasticity Val allele. In a separate investigation of eye gaze towards the eye versus mouth regions of neutral faces, we observed that short allele 5-HTTLPR carriers exhibited reduced looking at the eye region compared with those with the higher serotonin uptake Long allele. Together, these findings suggest that genetic mechanisms early in life may influence the establishment of patterns of visual scanning of environmental stressors, which in conjunction with other factors such as negative life events, may lead to psychological difficulties and disorders in the later adolescent and adult years. |
Alasdair D. F. Clarke; Micha Elsner; Hannah Rohde Giving good directions: Order of mention reflects visual salience Journal Article In: Frontiers in Psychology, vol. 6, pp. 1793, 2015. @article{Clarke2015, In complex stimuli, there are many different possible ways to refer to a specified target. Previous studies have shown that when people are faced with such a task, the content of their referring expression reflects visual properties such as size, salience and clutter. Here, we extend these findings and present evidence that (i) the influence of visual perception on sentence construction goes beyond content selection and in part determines the order in which different objects are mentioned and (ii) order of mention influences comprehension. Study 1 (a corpus study of reference productions) shows that when a speaker uses a relational description to mention a salient object, that object is treated as being in the common ground and is more likely to be mentioned first. Study 2 (a visual search study) asks participants to listen to referring expressions and find the specified target; in keeping with the above result, we find that search for easy-to-find targets is faster when the target is mentioned first, while search for harder-to-find targets is facilitated by mentioning the target later, after a landmark in a relational description. Our findings show that seemingly low-level and disparate mental “modules” like perception and sentence planning interact at a high level and in task-dependent ways. |
S. Clavagnier; Serge O. Dumoulin; R. F. Hess Is the cortical deficit in amblyopia due to reduced cortical magnification, loss of neural resolution, or neural disorganization? Journal Article In: Journal of Neuroscience, vol. 35, no. 44, pp. 14740–14755, 2015. @article{Clavagnier2015, The neural basis of amblyopia is a matter of debate. The following possibilities have been suggested: loss of foveal cells, reduced cortical magnification, loss of spatial resolution of foveal cells, and topographical disarray in the cellular map. To resolve this we undertook a population receptive field (pRF) functional magnetic resonance imaging analysis in the central field in humans with moderate-to-severe amblyopia. We measured the relationship between averaged pRF size and retinal eccentricity in retinotopic visual areas. Results showed that cortical magnification is normal in the foveal field of strabismic amblyopes. However, the pRF sizes are enlarged for the amblyopic eye. We speculate that the pRF enlargement reflects loss of cellular resolution or an increased cellular positional disarray within the representation of the amblyopic eye. |
Meghan Clayards; Oliver Niebuhr; M. Gareth Gaskell The time course of auditory and language-specific mechanisms in compensation for sibilant assimilation Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 1, pp. 311–328, 2015. @article{Clayards2015, Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /∫/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [∫] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [∫] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eye-tracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms. |
Evy Cleeren; Cindy Casteels; Karolien Goffin; Peter Janssen; Wim Van Paesschen Ictal perfusion changes associated with seizure progression in the amygdala kindling model in the rhesus monkey Journal Article In: Epilepsia, vol. 56, no. 9, pp. 1366–1375, 2015. @article{Cleeren2015, OBJECTIVE: Amygdala kindling is a widely used animal model for studying mesial temporal lobe epileptogenesis. In the macaque monkey, electrical amygdala kindling develops slowly and provides an opportunity for investigating ictal perfusion changes during epileptogenesis. METHODS: Two rhesus monkeys were electrically kindled through chronically implanted electrodes in the right amygdala over a period of 16 and 17 months. Ictal perfusion single photon emission computed tomography (SPECT) imaging was performed during each of the four predefined clinical stages. RESULTS: Afterdischarge duration increased slowly over 477 days for monkey K and 515 days for monkey S (18 ± 8 s in stage I; 52 ± 13 s in stage IV). During this time, the animals progressed through four clinical stages ranging from interrupting ongoing behavior to bilateral convulsions. Ictal SPECT perfusion imaging showed well-localized but widely distributed regions of hyperperfusion and hypoperfusion, in both cortical and subcortical structures, at every seizure stage. A large portion of the ictal network was involved in the early stages of epileptogenesis and subsequently expanded over time as seizure severity evolved. SIGNIFICANCE: Our data indicate that the different mesial temporal lobe seizure types occur within a common network affecting several parts of the brain, and that seizure severity may be determined by seizure-induced epileptogenesis within a bihemispheric network that is implicated from the start of the process. |
Justine Cléry; Olivier Guipponi; Soline Odouard; Claire Wardak; Suliann Ben Hamed Impact prediction by looming visual stimuli enhances tactile detection Journal Article In: Journal of Neuroscience, vol. 35, no. 10, pp. 4179–4189, 2015. @article{Clery2015, From an ecological point of view, approaching objects are potentially more harmful than receding objects. A predator, a dominant conspecific, or a mere branch coming up at high speed can all be dangerous if one does not detect them and produce the appropriate escape behavior fast enough. And indeed, looming stimuli trigger stereotyped defensive responses in both monkeys and human infants. However, while the heteromodal somatosensory consequences of visual looming stimuli can be fully predicted by their spatiotemporal dynamics, few studies if any have explored whether visual stimuli looming toward the face predictively enhance heteromodal tactile sensitivity around the expected time of impact and at its expected location on the body. In the present study, we report that, in addition to triggering a defensive motor repertoire, looming stimuli toward the face provide the nervous system with predictive cues that enhance tactile sensitivity on the face. Specifically, we describe an enhancement of tactile processes at the expected time and location of impact of the stimulus on the face. We additionally show that a looming stimulus that brushes past the face also enhances tactile sensitivity on the nearby cheek, suggesting that the space close to the face is incorporated into the subjects' body schema. We propose that this cross-modal predictive facilitation involves multisensory convergence areas subserving the representation of a peripersonal space and a safety boundary of self. |
Meaghan Clough; Laura Mitchell; Lynette Millist; Nathaniel Lizak; Shin Beh; Teresa C. Frohman; Elliot M. Frohman; Owen B. White; Joanne Fielding Ocular motor measures of cognitive dysfunction in multiple sclerosis I: Inhibitory control Journal Article In: Journal of Neurology, vol. 262, no. 5, pp. 1130–1137, 2015. @article{Clough2015, Our ability to control and inhibit behaviours that are inappropriate, unsafe, or no longer required is crucial for functioning successfully in complex environments. Here, we investigated whether a series of ocular motor (OM) inhibition tasks could dissociate deficits in patients with multiple sclerosis (MS), including patients with only a probable diagnosis (clinically isolated syndrome: CIS), from healthy individuals as well as a function of increasing disease duration. 25 patients with CIS, 25 early clinically definite MS patients (CDMS: ≤7 years of diagnosis), 24 late CDMS patients (>7 years from diagnosis), and 25 healthy controls participated. All participants completed a series of classic OM inhibition tasks [antisaccade (AS) task, memory-guided (MG) task, endogenous cue task], and a neuropsychological inhibition task [paced auditory serial addition test (PASAT)]. Clinical disability was characterised in CDMS patients using the Expanded Disability Severity Scale (EDSS). OM (latency and error) and PASAT performance were compared between patient groups and controls, as well as a function of disease duration. For CDMS patients only, results were correlated with EDSS score. All patient groups made more errors than controls on all OM tasks; error rate did not increase with increasing disease duration. In contrast, saccade latency (MG and endogenous cue tasks) was found to worsen with increasing disease duration. PASAT performance did not discriminate patient groups or disease duration. The EDSS did not correlate with any measure. These OM measures appear to dissociate deficit between patients at different disease durations. 
This suggests their utility as a measure of progression from the earliest inception of the disease. |
Moreno I. Coco; Frank Keller Integrating mechanisms of visual guidance in naturalistic language production Journal Article In: Cognitive Processing, vol. 16, no. 2, pp. 131–150, 2015. @article{Coco2015a, Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice span of the cued object and its perceptual competitor are similar; its latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention. |
Moreno I. Coco; Frank Keller The interaction of visual and linguistic saliency during syntactic ambiguity resolution Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 1, pp. 46–74, 2015. @article{Coco2015, Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is utilized to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used when they become available, optimizing different aspects of situated language processing. |
Russell Cohen Hoffing; Aaron R. Seitz Pupillometry as a glimpse into the neurochemical basis of human memory encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 27, no. 4, pp. 765–774, 2015. @article{CohenHoffing2015, Neurochemical systems are well studied in animal learning; however, ethical issues limit methodologies to explore these systems in humans. Pupillometry provides a glimpse into the brain's neurochemical systems, where pupil dynamics in monkeys have been linked with locus coeruleus (LC) activity, which releases norepinephrine (NE) throughout the brain. Here, we use pupil dynamics as a surrogate measure of neurochemical activity to explore the hypothesis that NE is involved in modulating memory encoding. We examine this using a task-irrelevant learning paradigm in which learning is boosted for stimuli temporally paired with task targets. We show that participants better recognize images that are paired with task targets than distractors and, in correspondence, that pupil size changes more for target-paired than distractor-paired images. To further investigate the hypothesis that NE nonspecifically guides learning for stimuli that are present with its release, a second procedure was used that employed an unexpected sound to activate the LC–NE system and induce pupil-size changes; results indicated a corresponding increase in memorization of images paired with the unexpected sounds. Together, these results suggest a relationship between the LC–NE system, pupil-size changes, and human memory encoding. |
Andrew L. Cohen; Adrian Staub Within-subject consistency and between-subject variability in Bayesian reasoning strategies Journal Article In: Cognitive Psychology, vol. 81, pp. 26–47, 2015. @article{Cohen2015, It is well known that people tend to perform poorly when asked to determine a posterior probability on the basis of a base rate, true positive rate, and false positive rate. The present experiments assessed the extent to which individual participants nevertheless adopt consistent strategies in these Bayesian reasoning problems, and investigated the nature of these strategies. In two experiments, one laboratory-based and one internet-based, each participant completed 36 problems with factorially manipulated probabilities. Many participants applied consistent strategies involving use of only one of the three probabilities provided in the problem, or additive combination of two of the probabilities. There was, however, substantial variability across participants in which probabilities were taken into account. In the laboratory experiment, participants' eye movements were tracked as they read the problems. There was evidence of a relationship between information use and attention to a source of information. Participants' self-assessments of their performance, however, revealed little confidence that the strategies they applied were actually correct. These results suggest that the hypothesis of base rate neglect actually underestimates people's difficulty with Bayesian reasoning, but also suggest that participants are aware of their ignorance. |
Noga Cohen; Natali Moyal; Avishai Henik Executive control suppresses pupillary responses to aversive stimuli Journal Article In: Biological Psychology, vol. 112, pp. 1–11, 2015. @article{Cohen2015a, Adaptive behavior depends on the ability to effectively regulate emotional responses. Continuous failure in the regulation of emotions can lead to heightened physiological reactions and to various psychopathologies. Recently, several behavioral and neuroimaging studies showed that exertion of executive control modulates emotion. Executive control is a high-order operation involved in goal-directed behavior, especially in the face of distractors or temptations. However, the role of executive control in regulating emotion-related physiological reactions is unknown. Here we show that exercise of executive control modulates reactivity of both the sympathetic and the parasympathetic components of the autonomic nervous system. Specifically, we demonstrate that both pupillary light reflex and pupil dilation for aversive stimuli are attenuated following recruitment of executive control. These findings offer new insights into the very basic mechanisms of emotion processing and regulation, and can lead to novel interventions for people suffering from emotion dysregulation psychopathologies. |
Matthew R. Cavanaugh; Ruyuan Zhang; Michael D. Melnick; Anasuya Das; Mariel Roberts; Duje Tadin; Marisa Carrasco; Krystel R. Huxlin Visual recovery in cortical blindness is limited by high internal noise Journal Article In: Journal of Vision, vol. 15, no. 10, pp. 1–18, 2015. @article{Cavanaugh2015, Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. 
Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. |
Dario Cazzoli; Simon Jung; Thomas Nyffeler; Tobias Nef; Pascal Wurtz; Urs P. Mosimann; René M. Müri The role of the right frontal eye field in overt visual attention deployment as assessed by free visual exploration Journal Article In: Neuropsychologia, vol. 74, pp. 37–41, 2015. @article{Cazzoli2015, The frontal eye field (FEF) is known to be involved in saccade generation and visual attention control. Studies applying covert attentional orienting paradigms have shown that the right FEF is involved in attentional shifts to both the left and the right hemifield. In the current study, we aimed at examining the effects of inhibitory continuous theta burst (cTBS) transcranial magnetic stimulation over the right FEF on overt attentional orienting, as measured by a free visual exploration paradigm. In forty-two healthy subjects, free visual exploration of naturalistic pictures was tested in three conditions: (1) after cTBS over the right FEF; (2) after cTBS over a control site (vertex); and (3) without any stimulation. The results showed that cTBS over the right FEF, but not cTBS over the vertex, triggered significant changes in the spatial distribution of the cumulative fixation duration. Compared to the group without stimulation and the group with cTBS over the vertex, cTBS over the right FEF decreased cumulative fixation duration in the left and in the right peripheral regions, and increased cumulative fixation duration in the central region. The present study supports the view that the right FEF is involved in the bilateral control of not only covert, but also overt, peripheral visual attention. |
Dario Cazzoli; René M. Müri; Christopher Kennard; Clive R. Rosenthal The role of the right posterior parietal cortex in letter migration between words Journal Article In: Journal of Cognitive Neuroscience, vol. 27, no. 2, pp. 377–386, 2015. @article{Cazzoli2015a, When briefly presented with pairs of words, skilled readers can sometimes report words with migrated letters (e.g., they report hunt when presented with the words hint and hurt). This and other letter migration phenomena have often been used to investigate factors that influence reading, such as letter position coding. However, the neural basis of letter migration is poorly understood. Previous evidence has implicated the right posterior parietal cortex (PPC) in processing visuospatial attributes and lexical properties during word reading. The aim of this study was to assess this putative role by combining an inhibitory TMS protocol with a letter migration paradigm, which was designed to examine the contributions of visuospatial attributes and lexical factors. Temporary interference with the right PPC led to three specific effects on letter migration. First, the number of letter migrations was significantly increased only in the group with active stimulation (vs. a sham stimulation group or a control group without stimulation), and there was no significant effect on other error types. Second, this effect occurred only when letter migration could result in a meaningful word (migration vs. control context). Third, the effect of active stimulation on the number of letter migrations was lateralized to target words presented on the left. Our study thus demonstrates that the right PPC plays a specific and causal role in the phenomenon of letter migration. The nature of this role cannot be explained solely in terms of visuospatial attention; rather, it involves an interplay between visuospatial attentional and word reading-specific factors. |
Aaron L. Cecala; Ivan Smalianchuk; Sanjeev B. Khanna; Matthew A. Smith; Neeraj J. Gandhi Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color Journal Article In: Journal of Neurophysiology, vol. 114, no. 1, pp. 570–584, 2015. @article{Cecala2015, When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. 
These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys. |
Benedetta Cesqui; Maura Mezzetti; Francesco Lacquaniti; Andrea D'Avella Gaze behavior in one-handed catching and its relation with interceptive performance: What the eyes can't tell Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0119445, 2015. @article{Cesqui2015, In ball sports, it is usually acknowledged that expert athletes track the ball more accurately than novices. However, there is also evidence that keeping the eyes on the ball is not always necessary for interception. Here we aimed at gaining new insights on the extent to which ocular pursuit performance is related to catching performance. To this end, we analyzed eye and head movements of nine subjects catching a ball projected by an actuated launching apparatus. Four different ball flight durations and two different ball arrival heights were tested and the quality of ocular pursuit was characterized by means of several timing and accuracy parameters. Catching performance differed across subjects and depended on ball flight characteristics. All subjects showed a similar sequence of eye movement events and a similar modulation of the timing of these events in relation to the characteristics of the ball trajectory. On a trial-by-trial basis there was a significant relationship only between pursuit duration and catching performance, confirming that keeping the eyes on the ball longer increases catching success probability. Ocular pursuit parameters values and their dependence on flight conditions as well as the eye and head contributions to gaze shift differed across subjects. However, the observed average individual ocular behavior and the eye-head coordination patterns were not directly related to the individual catching performance. These results suggest that several oculomotor strategies may be used to gather information on ball motion, and that factors unrelated to eye movements may underlie the observed differences in interceptive performance. |
Sarah Chabal; Viorica Marian Speakers of different languages process the visual world differently Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 3, pp. 539–550, 2015. @article{Chabal2015, Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. |
Sarah Chabal; Scott R. Schroeder; Viorica Marian Audio-visual object search is changed by bilingual experience Journal Article In: Attention, Perception, & Psychophysics, vol. 77, no. 8, pp. 2684–2693, 2015. @article{Chabal2015a, The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. |
Jason L. Chan; Michael J. Koval; Thilo Womelsdorf; Stephen G. Lomber; Stefan Everling Dorsolateral prefrontal cortex deactivation in monkeys reduces preparatory beta and gamma power in the superior colliculus Journal Article In: Cerebral Cortex, vol. 25, no. 12, pp. 4704–4714, 2015. @article{Chan2015, Cognitive control requires the selection and maintenance of task-relevant stimulus-response associations, or rules. The dorsolateral prefrontal cortex (DLPFC) has been implicated by lesion, functional imaging, and neurophysiological studies to be involved in encoding rules, but the mechanisms by which it modulates other brain areas are poorly understood. Here, the functional relationship of the DLPFC with the superior colliculus (SC) was investigated by bilaterally deactivating the DLPFC while recording local field potentials (LFPs) in the SC in monkeys performing an interleaved pro- and antisaccade task. Event-related LFPs showed differences between pro- and antisaccades and responded prominently to stimulus presentation. LFP power after stimulus onset was higher for correct saccades than erroneous saccades. Deactivation of the DLPFC did not affect stimulus onset related LFP activity, but reduced high beta (20-30 Hz) and high gamma (60-150 Hz) power during the preparatory period for both pro- and antisaccades. Spike rate during the preparatory period was positively correlated with gamma power and this relationship was attenuated by DLPFC deactivation. These results suggest that top-down control of the SC by the DLPFC may be mediated by beta oscillations. |
Steve W. C. Chang; Nicholas A. Fagan; Koji Toda; Amanda V. Utevsky; John M. Pearson; Michael L. Platt Neural mechanisms of social decision-making in the primate amygdala Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 52, pp. 16012–16017, 2015. @article{Chang2015, Significance: Making social decisions requires evaluation of benefits and costs to self and others. Long associated with emotion and vigilance, neurons in primate amygdala also signal reward and punishment as well as information about the faces and eyes of others. Here we show that neurons in the basolateral amygdala signal the value of rewards for self and others when monkeys make social decisions. These value-mirroring neurons reflected monkeys' tendency to make prosocial decisions on a momentary as well as long-term basis. We also found that delivering the social peptide oxytocin into basolateral amygdala enhances both prosocial tendencies and attention to the recipients of prosocial decisions. Our findings endorse the amygdala as a critical neural nexus regulating social decisions. Social decisions require evaluation of costs and benefits to oneself and others. Long associated with emotion and vigilance, the amygdala has recently been implicated in both decision-making and social behavior. The amygdala signals reward and punishment, as well as facial expressions and the gaze of others. Amygdala damage impairs social interactions, and the social neuropeptide oxytocin (OT) influences human social decisions, in part, by altering amygdala function. Here we show in monkeys playing a modified dictator game, in which one individual can donate or withhold rewards from another, that basolateral amygdala (BLA) neurons signaled social preferences both across trials and across days. BLA neurons mirrored the value of rewards delivered to self and others when monkeys were free to choose but not when the computer made choices for them. 
We also found that focal infusion of OT unilaterally into BLA weakly but significantly increased both the frequency of prosocial decisions and attention to recipients for context-specific prosocial decisions, endorsing the hypothesis that OT regulates social behavior, in part, via amygdala neuromodulation. Our findings demonstrate both neurophysiological and neuroendocrinological connections between primate amygdala and social decisions. |
Philippe Chassy; Trym A. E. Lindell; Jessica A. Jones; Galina V. Paramei A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study Journal Article In: Perception, vol. 44, no. 8-9, pp. 1085–1097, 2015. @article{Chassy2015, Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. |
Magdalena Chechlacz; Glyn W. Humphreys; Stamatios N. Sotiropoulos; Christopher Kennard; Dario Cazzoli Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation Journal Article In: Journal of Neuroscience, vol. 35, no. 46, pp. 15353–15368, 2015. @article{Chechlacz2015, Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. 
Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention. |