All EyeLink Eye Tracker Publications
All 13,000+ peer-reviewed EyeLink research publications up to 2024 (with some early 2025 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2017 |
Michael P. Mansbridge; Katsuo Tamaoka; Kexin Xiong; Rinus G. Verdonschot Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all Journal Article In: PLoS ONE, vol. 12, no. 6, pp. e0178369, 2017. @article{Mansbridge2017, This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRC) more readily than object-extracted relative clauses (ORC) in Mandarin Chinese. Presently, this has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be broken down to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand. |
Leonardo Martin; Anthony Tapper; David A. Gonzalez; Michelle Leclerc; Ewa Niechwiej-Szwedo The effects of task-relevant saccadic eye movements performed during the encoding of a serial sequence on visuospatial memory performance Journal Article In: Experimental Brain Research, vol. 235, no. 5, pp. 1519–1529, 2017. @article{Martin2017, Visuospatial working memory (VSWM) is a set of cognitive processes used to encode, maintain and manipulate spatial information. One important feature of VSWM is that it has a limited capacity such that only few items can be actively stored and manipulated simultaneously. Given the limited capacity, it is important to determine the conditions that affect memory performance as this will improve our understanding of the architecture and function of VSWM. Previous studies have shown that VSWM is disrupted when task-irrelevant eye movements are performed during the maintenance phase; however, relatively fewer studies examined the role of eye movements performed during the encoding phase. On one hand, performing eye movements during the encoding phase could result in a stronger memory trace because the memory formation is reinforced by the activation of the motor system. On the other hand, performing eye movements to each target could disrupt the configural processing of the spatial array because the spatial representation has to be updated with each movement to maintain perceptual stability. Therefore, this work was conducted to examine whether task-relevant saccadic eye movements performed during the encoding phase of a visuospatial working memory task affect the recall of serially presented targets. Results from two experiments showed that average recall accuracy was significantly higher when the spatial array (set size ≥ 7) was encoded using a covert strategy-that is, while participants fixated on a central target, in comparison to an overt strategy-that is, while participants moved their eyes to fixate on each target. Furthermore, the improvement in accuracy was evident only for targets presented in the first half of the sequence, suggesting that the primacy effect is modulated by the presence of eye movements. We propose that executing saccades during encoding could interfere with the ability to use a chunking strategy or disrupt active visualization of the configuration. In conclusion, this is the first study to show that task-relevant saccadic eye movements performed during encoding may actually reduce the spatial span of VSWM. These results extend the current knowledge about the role of eye movements in VSWM, and have implications for future studies investigating the VSWM. |
Jun Maruta; Peter Modera; Umesh Rajashekar; Lisa A. Spielman; Jamshid Ghajar Frequency responses to visual tracking stimuli may be affected by concussion Journal Article In: Military Medicine, vol. 182, no. 3-4, pp. 120–123, 2017. @article{Maruta2017a, Human visual tracking performance is known to be reduced with an increase of the target's speed and oscillation frequency, but changes in brain states following a concussion may alter these frequency responses. The goal of this study was to characterize and compare frequency-dependent smooth pursuit velocity degradation in normal subjects and patients who had chronic postconcussion symptoms, and also examine cases of acutely concussed patients. Eye movements were recorded while subjects tracked a target that moved along a circular trajectory of 10° radius at 0.33, 0.40, or 0.67 Hz. Performance was characterized by the gain of smooth pursuit velocity, with reduced gain indicating reduced performance. The difference between normal and chronic patient groups in the pattern of decrease in the gain of horizontal smooth pursuit velocity as a function of the stimulus frequency reflected patients performing more poorly than normal subjects at 0.40 Hz, while both groups performed similarly at 0.33 or 0.67 Hz. The performance of acute patients may represent yet another type of frequency response. The findings suggest that there may be ranges of stimulus frequencies that differentiate the effects of concussion from normal individuals. |
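The pursuit velocity gain reported in this kind of study is the ratio of eye speed to target speed; for a circular trajectory the target's tangential speed follows directly from the radius and oscillation frequency. The short sketch below works through that arithmetic for the stimulus parameters named in the abstract. The eye-speed value is a made-up illustration, not data from the paper.

```python
# Worked example (not from the paper): pursuit velocity gain for a target
# moving on a circular trajectory of radius r at frequency f.
# Tangential target speed = 2*pi*f*r, and gain = eye speed / target speed.
import numpy as np

r_deg, f_hz = 10.0, 0.40                     # 10 deg radius, 0.40 Hz condition
target_speed = 2 * np.pi * f_hz * r_deg      # ~25.1 deg/s
eye_speed = 20.0                             # hypothetical measured pursuit speed (deg/s)
gain = eye_speed / target_speed              # ~0.80; gain < 1 means the eye lags the target
print(f"target speed = {target_speed:.1f} deg/s, pursuit gain = {gain:.2f}")
```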
Jun Maruta; Lisa A. Spielman; Umesh Rajashekar; Jamshid Ghajar Visual tracking in development and aging Journal Article In: Frontiers in Neurology, vol. 8, pp. 640, 2017. @article{Maruta2017, A moving target is visually tracked with a combination of smooth pursuit and saccades. Human visual tracking eye movement develops through early childhood and adolescence, and declines in senescence. However, the knowledge regarding performance changes over the life course is based on data from distinct age groups in isolation using different procedures, and thus is fragmented. We sought to describe the age-dependence of visual tracking performance across a wide age range and compare it to that of simple visuo-manual reaction time. We studied a cross-sectional sample of 143 subjects aged 7-82 years old (37% male). Eye movements were recorded using video-oculography, while subjects viewed a computer screen and tracked a small target moving along a circular trajectory at a constant speed. For simple reaction time (SRT) measures, series of key presses that subjects made in reaction to cue presentation on a computer monitor were recorded using a standard software. The positional precision and smooth pursuit velocity gain of visual tracking followed a U-shaped trend over age, with best performances achieved between the ages of 20 and 50 years old. A U-shaped trend was also found for mean reaction time in agreement with the existing literature. Inter-individual variability was evident at any age in both visual tracking and reaction time metrics. Despite the similarity in the overall developmental and aging trend, correlations were not found between visual tracking and reaction time performances after subtracting the effects of age. Furthermore, while a statistically significant difference between the sexes was found for mean SRT in the sample, a similar difference was not found for any of the visual tracking metrics. Therefore, the cognitive constructs and their neural substrates supporting visual tracking and reaction time performances appear largely independent. In summary, age is an important covariate for visual tracking performance, especially for a pediatric population. Since visual tracking performance metrics may provide signatures of abnormal neurological or cognitive states independent of reaction time-based metrics, further understanding of age-dependent variations in normal visual tracking behavior is necessary. |
Jun Maruta; Lisa A. Spielman; Irene D. Tseretopoulos; Adrienne Hezghia; Jamshid Ghajar Possible medication-resistant deficits in adult ADHD Journal Article In: Journal of Attention Disorders, vol. 21, no. 14, pp. 1169–1179, 2017. @article{Maruta2017b, OBJECTIVE: The nature of ADHD, especially in adulthood, is not well-understood. Therefore, we explored subcomponents of attention in adult ADHD. METHOD: Twenty-three adults with ADHD were tested on neurocognitive and visual tracking performance both while on their regular prescription stimulant medication and while abstaining from the medication for 1 day. Pairwise comparisons to 46 two-for-one matched normal controls were made to detect medication-resistant effects of ADHD, and within-participant comparisons were made to detect medication-sensitive effects in patients. RESULTS: Even when on medication, patients performed more poorly than controls on a spatial working memory task, and on visual tracking and simple reaction time tasks immediately following other attention-demanding tasks. Patients' visual tracking performance degraded while off-medication in a manner consistent with reduced vigilance. CONCLUSION: There may be persistent cognitive impairments in adult ADHD despite medication. In addition, the benefit of stimulants seems reduced under cognitive fatigue. |
Christina Marx; Stefan Hawelka; Sarah Schuster; Florian Hutzler Foveal processing difficulty does not affect parafoveal preprocessing in young readers Journal Article In: Scientific Reports, vol. 7, pp. 41602, 2017. @article{Marx2017, Recent evidence suggested that parafoveal preprocessing develops early during reading acquisition, that is, young readers profit from valid parafoveal information and exhibit a resultant preview benefit. For young readers, however, it is unknown whether the processing demands of the currently fixated word modulate the extent to which the upcoming word is parafoveally preprocessed - as it has been postulated (for adult readers) by the foveal load hypothesis. The present study used the novel incremental boundary technique to assess whether 4th and 6th Graders exhibit an effect of foveal load. Furthermore, we attempted to distinguish the foveal load effect from the spillover effect. These effects are hard to differentiate with respect to the expected pattern of results, but are conceptually different. The foveal load effect is supposed to reflect modulations of the extent of parafoveal preprocessing, whereas the spillover effect reflects the ongoing processing of the previous word whilst the reader's fixation is already on the next word. The findings revealed that the young readers did not exhibit an effect of foveal load, but a substantial spillover effect. The implications for previous studies with adult readers and for models of eye movement control in reading are discussed. |
Anna Marzecová; Andreas Widmann; Iria SanMiguel; Sonja A. Kotz; Erich Schröger Interrelation of attention and prediction in visual processing: Effects of task-relevance and stimulus probability Journal Article In: Biological Psychology, vol. 125, pp. 76–90, 2017. @article{Marzecova2017, The potentially interactive influence of attention and prediction was investigated by measuring event-related potentials (ERPs) in a spatial cueing task with attention (task-relevant) and prediction (probabilistic) cues. We identified distinct processing stages of this interactive influence. Firstly, in line with the attentional gain hypothesis, a larger amplitude response of the contralateral N1, and Nd1 for attended gratings was observed. Secondly, conforming to the attenuation-by-prediction hypothesis, a smaller negativity in the time window directly following the peak of the N1 component for predicted compared to unpredicted gratings was observed. In line with the hypothesis that attention and prediction interface, unpredicted/unattended stimuli elicited a larger negativity at central-parietal sites, presumably reflecting an increased prediction error signal. Thirdly, larger P3 responses to unpredicted stimuli pointed to the updating of an internal model. Attention and prediction can be considered as differentiated mechanisms that may interact at different processing stages to optimise perception. |
Yousri Marzouki; Valériane Dusaucy; Myriam Chanceaux; Sebastiaan Mathôt The World (of Warcraft) through the eyes of an expert Journal Article In: PeerJ, vol. 5, pp. 1–21, 2017. @article{Marzouki2017, Negative correlations between pupil size and the tendency to look at salient locations were found in recent studies (e.g., Mathôt et al., 2015). It is hypothesized that this negative correlation might be explained by the mental effort put by participants in the task that leads in return to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because there is no available standard tool to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge about WoW and acquired skills through completed raids, highest rated battlegrounds, Skill Points, etc. Experts ( N = 4) and novices ( N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differ in regards with their content (i.e, informative locations) and visual complexity (i.e, salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts |
Nicolas Y. Masse; Jonathan M. Hodnefield; David J. Freedman Mnemonic encoding and cortical organization in parietal and prefrontal cortices Journal Article In: Journal of Neuroscience, vol. 37, no. 25, pp. 6098–6112, 2017. @article{Masse2017, Persistent activity within the frontoparietal network is consistently observed during tasks that require working memory. However, the neural circuit mechanisms underlying persistent neuronal encoding within this network remain unresolved. Here, we ask how neural circuits support persistent activity by examining population recordings from posterior parietal (PPC) and prefrontal (PFC) cortices in two male monkeys that performed spatial and motion direction-based tasks that required working memory. While spatially selective persistent activity was observed in both areas, robust selective persistent activity for motion direction was only observed in PFC. Crucially, we find that this difference between mnemonic encoding in PPC and PFC is associated with the presence of functional clustering: PPC and PFC neurons up to ~700 μm apart preferred similar spatial locations, and PFC neurons up to ~700 μm apart preferred similar motion directions. In contrast, motion-direction tuning similarity between nearby PPC neurons was much weaker and decayed rapidly beyond ~200 μm. We also observed a similar association between persistent activity and functional clustering in trained recurrent neural network models embedded with a columnar topology. These results suggest that functional clustering facilitates mnemonic encoding of sensory information. |
Nicolas Masson; Mauro Pesenti; Valérie Dormal Impact of optokinetic stimulation on mental arithmetic Journal Article In: Psychological Research, vol. 81, no. 4, pp. 840–849, 2017. @article{Masson2017, Solving arithmetic problems has been shown to induce shifts of spatial attention, subtraction problems orienting attention to the left side, and addition problems to the right side of space. At the neurofunctional level, the activations elicited by the solving of arithmetical problems resemble those elicited by horizontal eye movements. Whether overt orientation of attention (i.e., eye movements) can be linked to the solving procedure is, however, still under debate. In the present study, we used optokinetic stimulation (OKS) to trigger automatic eye movements to orient participants' overt attention to the right or to the left of their visual field while they were solving addition or subtraction problems. The results show that, in comparison to leftward OKS and a control condition, rightward OKS facilitates the solving of addition problems that necessitate a carrying procedure. Subtraction solving was unaffected by leftward or rightward OKS. These results converge with previous findings to show that attentional shifts are functionally related to mental arithmetic processing |
Kaitlin E. W. Laidlaw; Alan Kingstone Fixations to the eyes aids in facial encoding; covertly attending to the eyes does not Journal Article In: Acta Psychologica, vol. 173, pp. 55–65, 2017. @article{Laidlaw2017, When looking at images of faces, people will often focus their fixations on the eyes. It has previously been demonstrated that the eyes convey important information that may improve later facial recognition. Whether this advantage requires that the eyes be fixated, or merely attended to covertly (i.e. while looking elsewhere), is unclear from previous work. While attending to the eyes covertly without fixating them may be sufficient, the act of using overt attention to fixate the eyes may improve the processing of important details used for later recognition. In the present study, participants were shown a series of faces and, in Experiment 1, asked to attend to them normally while avoiding looking at either the eyes or, as a control, the mouth (overt attentional avoidance condition); or in Experiment 2 fixate the center of the face while covertly attending to either the eyes or the mouth (covert attention condition). After the first phase, participants were asked to perform an old/new face recognition task. We demonstrate that a) when fixations to the eyes are avoided during initial viewing then subsequent face discrimination suffers, and b) covert attention to the eyes alone is insufficient to improve face discrimination performance. Together, these findings demonstrate that fixating the eyes provides an encoding advantage that is not availed by covert attention alone. |
Elke B. Lange; Fabian Zweck; Petra Sinn Microsaccade-rate indicates absorption by music listening Journal Article In: Consciousness and Cognition, vol. 55, pp. 59–78, 2017. @article{Lange2017, The power of music is a literary topos, which can be attributed to intense and personally significant experiences, one of them being the state of absorption. Such phenomenal states are difficult to grasp objectively. We investigated the state of musical absorption by using eye tracking. We utilized a load related definition of state absorption: multimodal resources are committed to create a unified representation of music. Resource allocation was measured indirectly by microsaccade rate, known to indicate cognitive processing load. We showed in Exp. 1 that microsaccade rate also indicates state absorption. Hence, there is cross-modal coupling between an auditory aesthetic experience and fixational eye movements. When removing the fixational stimulus in Exp. 2, saccades are no longer generated upon visual input and the cross-modal coupling disappeared. Results are interpreted in favor of the load hypothesis of microsaccade rate and against the assumption of general slowing by state absorption. |
Ryan W. Langridge; Jonathan J. Marotta In: Experimental Brain Research, vol. 235, no. 9, pp. 2705–2716, 2017. @article{Langridge2017, Participants executed right-handed reach-to-grasp movements toward horizontally translating targets. Visual feedback of the target when reaching, as well as the presence of additional cues placed above and below the target's path, was manipulated. Comparison of average fixations at reach onset and at the time of the grasp suggested that participants accurately extrapolated the occluded target's motion prior to reach onset, but not after the reach had been initiated, resulting in inaccurate grasp placements. Final gaze and grasp positions were more accurate when reaching for leftward moving targets, suggesting individuals use different grasp strategies when reaching for targets traveling away from the reaching hand. Additional cue presence appeared to impair participants' ability to extrapolate the disappeared target's motion, and caused grasps for occluded targets to be less accurate. Novel information is provided about the eye-hand strategies used when reaching for moving targets in unpredictable visual conditions. |
S. J. Larcombe; Christopher Kennard; H. Bridge Time course influences transfer of visual perceptual learning across spatial location Journal Article In: Vision Research, vol. 135, pp. 26–33, 2017. @article{Larcombe2017, Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. |
Thibaut Le Naour; Jean-Pierre Bresciani A skeleton-based approach to analyze and visualize oculomotor behavior when viewing animated characters Journal Article In: Journal of Eye Movement Research, vol. 10, no. 5, pp. 1–19, 2017. @article{LeNaour2017, Knowing what people look at and understanding how they analyze the dynamic gestures of their peers is an exciting challenge. In this context, we propose a new approach to quantifying and visualizing the oculomotor behavior of viewers watching the movements of animated characters in dynamic sequences. Using this approach, we were able to illustrate, on a 'heat mesh', the gaze distribution of one or several viewers, i.e., the time spent on each part of the body, and to visualize viewers' timelines, which are linked to the heat mesh. Our approach notably provides an 'intuitive' overview combining the spatial and temporal characteristics of the gaze pattern, thereby constituting an efficient tool for quickly comparing the oculomotor behaviors of different viewers. The functionalities of our system are illustrated through two use case experiments with 2D and 3D animated media sources, respectively. |
Matthew L. Leavitt; Florian Pieper; Adam J. Sachs; Julio C. Martinez-Trujillo Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 12, pp. E2494–E2503, 2017. @article{Leavitt2017a, Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (rsc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise rsc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional rsc structure on information coding in the neuronal ensembles. We found that the rsc structure could facilitate or impair coding, depending on the size of the ensemble and tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. These WM-optimized subensembles were more signal correlation (rsignal)-diverse and anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble's rsc structure. We observed a pattern of rsc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity and that the rsc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single neuron coding properties and multidimensional, ensemble-level phenomena. |
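For readers unfamiliar with the spike-count correlation (rsc) measure central to this paper, the snippet below computes pairwise rsc values from a simulated trials-by-neurons count matrix. It is only a schematic illustration: the Poisson counts, the array shapes, and the pooling across all trials (a real analysis would compute correlations within conditions) are assumptions, and the authors' decoding and subensemble-optimization analyses are not reproduced here.

```python
# Illustrative sketch: pairwise spike-count correlations (r_sc).
# Simulated data only; shapes and pooling are assumptions.
import numpy as np

rng = np.random.default_rng(2)
spike_counts = rng.poisson(lam=5.0, size=(300, 48))   # trials x neurons (simulated)

# r_sc: correlate trial-to-trial count fluctuations between every neuron pair
rsc_matrix = np.corrcoef(spike_counts, rowvar=False)  # neurons x neurons
pairs = np.triu_indices_from(rsc_matrix, k=1)         # unique off-diagonal pairs
print("mean pairwise r_sc:", rsc_matrix[pairs].mean())
```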
Jeongmi Lee; Joy J. Geng Idiosyncratic patterns of representational similarity in prefrontal cortex predict attentional performance Journal Article In: Journal of Neuroscience, vol. 37, no. 5, pp. 1257–1268, 2017. @article{Lee2017a, The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results identified that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. |
Jeyeon Lee; Hoseok Choi; Seho Lee; Baek Hwan Cho; Kyoung-ha Ahn; In Young Kim; Kyoung-Min Lee; Dong-Pyo Jang Decoding saccadic directions using epidural ECoG in non-human primates Journal Article In: Journal of Korean Medical Science, vol. 32, no. 8, pp. 1243–1250, 2017. @article{Lee2017, A brain-computer interface (BCI) can be used to restore some communication as an alternative interface for patients suffering from locked-in syndrome. However, most BCI systems are based on SSVEP, P300, or motor imagery, and a diversity of BCI protocols would be needed for various types of patients. In this paper, we trained the choice saccade (CS) task in 2 non-human primate monkeys and recorded the brain signal using an epidural electrocorticogram (eECoG) to predict eye movement direction. We successfully predicted the direction of the upcoming eye movement using a support vector machine (SVM) with the brain signals after the directional cue onset and before the saccade execution. The mean accuracies were 80% for 2 directions and 43% for 4 directions. We also quantified the spatial-spectro-temporal contribution ratio using SVM recursive feature elimination (RFE). The channels over the frontal eye field (FEF), supplementary eye field (SEF), and superior parietal lobule (SPL) area were dominantly used for classification. The α-band in the spectral domain and the time bins just after the directional cue onset and just before the saccadic execution were mainly useful for prediction. A saccade based BCI paradigm can be projected in the 2D space, and will hopefully provide an intuitive and convenient communication platform for users. |
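As a rough illustration of the decoding pipeline described in this entry (linear SVM classification of saccade direction from spatio-spectro-temporal band-power features, plus recursive feature elimination to rank features), the sketch below uses scikit-learn on simulated data. All array shapes, parameter values, and the random features are assumptions; this is not the authors' code, and on random data accuracy will sit near the 25% chance level for four directions.

```python
# Hypothetical decoding sketch: SVM + RFE on simulated band-power features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_bands, n_bins = 200, 32, 5, 10
X = rng.normal(size=(n_trials, n_channels * n_bands * n_bins))  # band power per trial (simulated)
y = rng.integers(0, 4, size=n_trials)                           # 4 saccade directions

# Cross-validated classification accuracy with a linear SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = 0.25)")

# Recursive feature elimination: rank which channel/band/time-bin features
# carry direction information (analogous in spirit to SVM-RFE in the paper)
rfe = RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1).fit(X, y)
ranking = rfe.ranking_.reshape(n_channels, n_bands, n_bins)  # lower rank = more useful
```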
Jiyeon Lee Time course of lexicalization during sentence production in Parkinson's Disease: Eye-tracking while speaking Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 60, no. 4, pp. 924–936, 2017. @article{Lee2017b, Purpose: Growing evidence suggests that sentence formulation is affected in Parkinson's disease (PD); however, how speakers with PD coordinate sentence planning and speaking remains unclear. Within 2 competing models of sentence production, this study examined whether speakers with PD show advanced buffering of words to minimize disfluencies and increased demands during speech or whether they plan one word at a time, compromising accuracy and fluency of speech. Method: Participants described 3 computer-displayed pictures using the sentence "the A and the B are above the C." Name agreement (codability) was varied to be high (clock) or low (sofa/couch) for each object position (A, B, C), affecting difficulty of lexical selection. Participants' gaze durations to each object were recorded. Results: Speakers with PD showed incremental word-by-word planning, retrieving only the first lexical item (A) before speech onset, similar to controls. However, they produced greater word-finding errors and disfluencies compared to controls for the low-codable pictures, but not for high-codable pictures. Conclusions: These findings suggest that by following word-by-word incremental production, speakers with PD compromise fluency and accuracy of speech to a greater extent than healthy older speakers and that PD is associated with impaired inhibitory control during lexical selection. |
Matthew W. Lowder; Peter C. Gordon Print exposure modulates the effects of repetition priming during sentence reading Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 6, pp. 1935–1942, 2017. @article{Lowder2017, Individual readers vary greatly in the quality of their lexical representations, and consequently in how quickly and efficiently they can access orthographic and lexical knowledge. This variability may be explained, at least in part, by individual differences in exposure to printed language, because practice at reading promotes the development of stronger reading skills. In the present eyetracking experiment, we tested the hypothesis that the efficiency of word recognition during reading improves with increases in print exposure, by determining whether the magnitude of the repetition-priming effect is modulated by individual differences in scores on the author recognition test (ART). Lexical repetition of target words was manipulated across pairs of unrelated sentences that were presented on consecutive trials. The magnitude of the repetition effect was modulated by print exposure in early measures of processing, such that the magnitude of the effect was inversely related to scores on the ART. The results showed that low levels of print exposure, and thus lower-quality lexical representations, are associated with high levels of difficulty recognizing words, and thus with the greatest room to benefit from repetition. Furthermore, the interaction between scores on the ART and repetition suggests that print exposure is not simply an index of general reading speed, but rather that higher levels of print exposure are associated with an enhanced ability to access lexical knowledge and recognize words during reading. |
Jia E. Loy; Hannah Rohde; Martin Corley Effects of disfluency in online interpretation of deception Journal Article In: Cognitive Science, vol. 41, pp. 1434–1456, 2017. @article{Loy2017, A speaker's manner of delivery of an utterance can affect a listener's pragmatic interpretation of the message. Disfluencies (such as filled pauses) influence a listener's off-line assessment of whether the speaker is truthful or deceptive. Do listeners also form this assessment during the moment-by-moment processing of the linguistic message? Here we present two experiments that examined listeners' judgments of whether a speaker was indicating the true location of the prize in a game during fluent and disfluent utterances. Participants' eye and mouse movements were biased toward the location named by the speaker during fluent utterances, whereas the opposite bias was observed during disfluent utterances. This difference emerged rapidly after the onset of the critical noun. Participants were similarly sensitive to disfluencies at the start of the utterance (Experiment 1) and in the middle (Experiment 2). Our findings support recent research showing that listeners integrate pragmatic information alongside semantic content during the earliest moments of language processing. Unlike prior work which has focused on pragmatic effects in the interpretation of the literal message, here we highlight disfluency's role in guiding a listener to an alternative non-literal message. |
Jiachen Lu; Lili Tian; Jiafeng Zhang; Jing Wang; Chaoxiong Ye; Qiang Liu Strategic inhibition of distractors with visual working memory contents after involuntary attention capture Journal Article In: Scientific Reports, vol. 7, pp. 16314, 2017. @article{Lu2017, Previous research has suggested that visual working memory (VWM) contents had a guiding effect on selective attention, and once participants realized that the distractors shared the same information with VWM contents in the search task, they would strategically inhibit the potential distractors with VWM contents. However, previous behavioral studies could not reveal the way how distractors with VWM contents are inhibited strategically. By employing the eye-tracking technique and a dual-task paradigm, we manipulated the probability of memory items occurring as distractors to explore this issue. Consistent with previous behavioral studies, the results showed that the inhibitory effect occurred only in the high-probability condition, while the guiding effect emerged in the low-probability condition. More importantly, the eye-movement results indicated that in the high-probability condition, once few (even one) distractors with VWM contents were captured at first, all the remaining distractors with VWM contents would be rejected as a whole. However, in the low-probability condition, attention could be captured by the majority of distractors with VWM contents. These results suggested that the guiding effect of VWM contents on attention is involuntary in the early stage of visual search. After the completion of this involuntary stage, the guiding effect of task-irrelevant VWM contents on attention could be strategically controlled. |
Rachel G. Lucas-Thompson; Adina Dumitrache; Amy Quinn Sparks Appraisals of interparental conflict and change in attention to emotion after exposure to marital conflict Journal Article In: Journal of Child and Family Studies, vol. 26, no. 8, pp. 2175–2181, 2017. @article{LucasThompson2017, The goal of this study was to investigate whether exposure to marital conflict changes patterns of attention to anger and happiness, as well as whether those patterns vary based on appraisals of the history of interparental conflict in the home. Emerging adults viewed photo pairs with one emotionally-neutral photo and another photo depicting a happy/angry emotional interaction (while a high-speed camera tracked gaze), were randomly assigned to view a neutral or marital conflict recording, viewed neutral-emotional photo pairs again, and then reported their appraisals of their parents' conflict. Results indicated that feeling threatened by and to blame for parental conflict predicted avoidance of happy emotions at baseline. Although there were no significant changes in attention to emotion overall based on condition, self-blame for interparental conflict predicted greater increases in time spent looking at anger after watching marital conflict (but not after watching the neutral recording). These results indicate that differences in attention to emotion may be one mechanism linking parental conflict to anxiety that could be the focus of prevention/intervention efforts to reduce anxiety symptoms in those from high-conflict homes. |
Casimir J. H. Ludwig; David R. Evens Information foraging for perceptual decisions Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 2, pp. 245–264, 2017. @article{Ludwig2017, We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) Information (about noisy motion information) grows to an asymptotic level that depends on the quality of the information source; (b) The limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) Little information is lost once a new source of information is being sampled; and (d) The point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. |
Shijian Luo; Yi Hu; Yuxiao Zhou Factors attracting Chinese Generation Y in the smartphone application marketplace Journal Article In: Frontiers of Computer Science, vol. 11, no. 2, pp. 290–306, 2017. @article{Luo2017, Smartphone applications (apps) are becoming increasingly popular all over the world, particularly in the Chinese Generation Y population; however, surprisingly, only a small number of studies on app factors valued by this important group have been conducted. Because the competition among app developers is increasing, app factors that attract users' attention are worth studying for sales promotion. This paper examines these factors through two separate studies. In the first study, i.e., Experiment 1, which consists of a survey, perceptual rating and verbal protocol methods are employed, and 90 randomly selected app websites are rated by 169 experienced smartphone users according to app attraction. Twelve of the most rated apps (six highest rated and six lowest rated) are selected for further investigation, and 11 influential factors that Generation Y members value are listed. A second study, i.e., Experiment 2, is conducted using the most and least rated app websites from Experiment 1, and eye tracking and verbal protocol methods are used. The eye movements of 45 participants are tracked while browsing these websites, providing evidence about what attracts these users' attention and the order in which the app components are viewed. The results of these two studies suggest that Chinese Generation Y is a content-centric group when they browse the smartphone app marketplace. Icon, screenshot, price, rating, and name are the dominant and indispensable factors that influence purchase intentions, among which icon and screenshot should be meticulously designed. Price is another key factor that drives Chinese Generation Y's attention. The recommended apps are the least dominant element. Design suggestions for app websites are also proposed. This research has important implications. |
Philipp N. Hesse; Frank Bremmer The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane Journal Article In: Vision Research, vol. 130, pp. 85–96, 2017. @article{Hesse2017a, The existence of an association between numbers and space is known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers, when being asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371–396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers are represented more on the right. The SNARC effect has been demonstrated for all three cardinal axes and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519–1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along HM and VM, which allowed predicting the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane our findings provide further evidence for the existence of a three-dimensional mental number space. |
Philipp N. Hesse; Constanze Schmitt; Steffen Klingenhoefer; Frank Bremmer Preattentive processing of numerical visual information Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 70, 2017. @article{Hesse2017, Humans can perceive and estimate approximate numerical information, even when accurate counting is impossible e.g. due to short presentation time. If the number of objects to be estimated is small, typically around one to four items, observers are able to give very fast and precise judgments with high confidence – an effect that is called subitizing. Due to its speed and effortless nature subitizing has usually been assumed to be preattentive, putting it into the same category as other low level visual features like color or orientation. More recently, however, a number of studies have suggested that subitizing might be dependent on attentional resources. In our current study we investigated the potentially preattentive nature of visual numerical perception in the subitizing range by means of EEG. We presented peripheral, task irrelevant sequences of stimuli consisting of a certain number of circular patches while participants were engaged in a demanding, non-numerical detection task at the fixation point drawing attention away from the number stimuli. Within a sequence of stimuli of a given number of patches (called ‘standards') we interspersed some stimuli of different numerosity (‘oddballs'). We compared the evoked responses to visually identical stimuli that had been presented in two different conditions, serving as standard in one condition and as oddball in the other. We found significant visual mismatch negativity (vMMN) responses over parieto-occipital electrodes. In addition to the ERP analysis, we performed a time-frequency analysis to investigate whether the vMMN was accompanied by additional oscillatory processes. We found a concurrent increase in evoked theta power of similar strength over both hemispheres. Our results provide clear evidence for a preattentive processing of numerical visual information in the subitizing range. |
Roy S. Hessels; Diederick C. Niehorster; Chantal Kemner; Ignace T. C. Hooge Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC) Journal Article In: Behavior Research Methods, vol. 49, no. 5, pp. 1802–1823, 2017. @article{Hessels2017, Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nystrom, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppanen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm-identification by two-means clustering (I2MC)-built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research). |
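The core idea behind I2MC — repeatedly splitting a moving window of gaze samples into two clusters and using the quality of that split as evidence for a saccade — can be sketched in a few lines. The fragment below is a loose illustration of that idea, not the published algorithm: scikit-learn's KMeans stands in for the two-means step, and the window length, overlap, weighting scheme, and threshold are all assumptions. The real I2MC (available from the authors) additionally handles interpolation of missing data, downsampling, and fixation merging, none of which is shown here.

```python
# Loose sketch of a two-means-clustering fixation/saccade split (assumptions throughout).
import numpy as np
from sklearn.cluster import KMeans

def clustering_weights(x, y, fs=300.0, window_ms=200.0):
    """Weight each gaze sample by how strongly a moving window around it
    splits into two spatial clusters (high weight = likely saccade)."""
    n = len(x)
    win = max(int(round(window_ms / 1000.0 * fs)), 4)
    weights, counts = np.zeros(n), np.zeros(n)
    for start in range(0, n - win + 1, max(win // 2, 1)):   # 50% overlapping windows
        idx = slice(start, start + win)
        pts = np.column_stack((x[idx], y[idx]))
        km = KMeans(n_clusters=2, n_init=3, random_state=0).fit(pts)
        # A clean fixation-to-fixation split has one label switch; noise has many.
        switches = np.count_nonzero(np.diff(km.labels_))
        sep = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
        weights[idx] += sep / max(switches, 1)
        counts[idx] += 1
    return weights / np.maximum(counts, 1)

def detect_fixations(x, y, fs=300.0, cutoff_sd=2.0):
    """Label samples as fixation where the clustering weight stays below
    mean + cutoff_sd * SD (threshold choice is an assumption)."""
    w = clustering_weights(np.asarray(x, float), np.asarray(y, float), fs)
    return w < w.mean() + cutoff_sd * w.std()   # True = fixation sample

# Toy usage: 2 s of 300 Hz data with one horizontal position jump (a "saccade")
t = np.arange(600)
x = np.where(t < 300, 100.0, 300.0) + np.random.default_rng(0).normal(0, 5, 600)
y = 200.0 + np.random.default_rng(1).normal(0, 5, 600)
fixation_mask = detect_fixations(x, y, fs=300.0)
```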
Markus A. Hietanen; Nicholas S. C. Price; Shaun L. Cloherty; Kostas Hadjidimitrakis; Michael R. Ibbotson Long-term sensorimotor adaptation in the ocular following system of primates Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0189030, 2017. @article{Hietanen2017, The sudden movement of a wide-field image leads to a reflexive eye tracking response referred to as short-latency ocular following. If the image motion occurs soon after a saccade the initial speed of the ocular following is enhanced, a phenomenon known as post-saccadic enhancement. We show in macaque monkeys that repeated exposure to the same stimulus regime over a period of months leads to progressive increases in the initial speeds of ocular following. The improvement in tracking speed occurs for ocular following with and without a prior saccade. As a result of the improvement in ocular following speeds, the influence of post-saccadic enhancement wanes with increasing levels of training. The improvement in ocular following speed following repeated exposure to the same oculomotor task represents a novel form of sensori-motor learning in the context of a reflexive movement. |
Anne P. Hillstrom; Joice D. Segabinazi; Hayward J. Godwin; Simon P. Liversedge; Valerie Benson Cat and mouse search: The influence of scene and object analysis on eye movements when targets change locations during search Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 372, pp. 1–9, 2017. @article{Hillstrom2017, We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participant's first saccade during search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. |
Stephen J. Hinde; Tim J. Smith; Iain D. Gilchrist In search of oculomotor capture during film viewing: Implications for the balance of top-down and bottom-up control in the saccadic system Journal Article In: Vision Research, vol. 134, pp. 7–17, 2017. @article{Hinde2017, In the laboratory, the abrupt onset of a visual distractor can generate an involuntary orienting response: this robust oculomotor capture effect has been reported in a large number of studies (e.g. Ludwig & Gilchrist, 2002; Theeuwes, Kramer, Hahn, & Irwin, 1998) suggesting it may be a ubiquitous part of more natural visual behaviour. However the visual stimuli used in these experiments have tended to be static and had none of the complexity and dynamism of more natural visual environments. In addition, the primary task in the laboratory (typically visual search) can be tedious for the participants, with participants losing interest and becoming stimulus driven and more easily distracted. Both of these factors may have led to an overestimation of the extent to which oculomotor capture occurs and the importance of this phenomenon in everyday visual behaviour. To address this issue, in the current series of studies we presented abrupt and highly salient visual distractors away from fixation while participants watched a film. No evidence of oculomotor capture was found. However, the distractor does affect fixation duration: we find an increase in fixation duration analogous to the remote distractor effect (Walker, Deubel, Schneider, & Findlay, 1997). These results suggest that during dynamic scene perception, the oculomotor system may be under far more top-down control than traditional laboratory-based tasks have previously suggested. |
Florian Hintz; Antje S. Meyer; Falk Huettig Predictors of verb-mediated anticipatory eye movements in the visual world Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 9, pp. 1352–1374, 2017. @article{Hintz2017, Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of 5 potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners' production fluency, receptive vocabulary knowledge, and nonverbal intelligence. In 3 eye-tracking experiments, participants looked at sets of 4 objects and listened to sentences where the final word was predictable or not predictable (e.g., "The man peels/draws an apple"). On predictable trials only the target object, but not the distractors, was functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not predict anticipatory eye movements, and nonverbal intelligence was only a very weak predictor. Participants' production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact. |
Renske S. Hoedemaker; Jessica Ernst; Antje S. Meyer; Eva Belke Language production in a shared task: Cumulative Semantic Interference from self- and other-produced context words Journal Article In: Acta Psychologica, vol. 172, pp. 55–63, 2017. @article{Hoedemaker2017, This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, the joint-naming task showed a within-speaker CSI effect, such that naming latencies increased as a function of the number of category exemplars named previously by the participant (self-produced items). Crucially, we also observed an across-speaker CSI effect, such that naming latencies slowed as a function of the number of category members named by the participant's task partner (other-produced items). The magnitude of the across-speaker CSI effect did not vary as a function of whether or not the listening participant could see the pictures their partner was naming. The observation of across-speaker CSI suggests that the effect originates at the conceptual level of the language system, as proposed by Belke's (2013) Conceptual Accumulation account. Whereas self-produced and other-produced words both resulted in a CSI effect on naming latencies, post-experiment free recall rates were higher for self-produced than other-produced items. Together, these results suggest that both speaking and listening result in implicit learning at the conceptual level of the language system but that these effects are independent of explicit learning as indicated by item recall. |
Renske S. Hoedemaker; Peter C. Gordon The onset and time course of semantic priming during rapid recognition of visual words Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 5, pp. 881–902, 2017. @article{Hoedemaker2017a, In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. |
Alexandra Hoffmann; Ulrich Ettinger; Gustavo A. Reyes del Paso; Stefan Duschek Executive function and cardiac autonomic regulation in depressive disorders Journal Article In: Brain and Cognition, vol. 118, pp. 108–117, 2017. @article{Hoffmann2017, Executive function impairments have been frequently observed in depressive disorders. Moreover, reduced heart rate variability (HRV) has repeatedly been described, especially in the high frequency band (i.e., respiratory sinus arrhythmia, RSA), suggesting lower vagal cardiac outflow. The study tested the hypothesis of involvement of low vagal tone in executive dysfunction in depression. In addition to RSA, HRV in the low frequency (LF) band was assessed. In 36 patients with depression and 36 healthy subjects, electrocardiography recordings were accomplished at rest and during performance of five executive function tasks (number-letter task, n-back task, continuous performance test, flanker task, and antisaccade task). Patients displayed increased error rates and longer reaction times in the task-switching condition of the number-letter task, in addition to increased error rates in the n-back task and the final of two blocks of the antisaccade task. In patients, both HRV parameters were lower during all experimental phases. RSA correlated negatively with reaction time during task-switching. This finding confirms reduced performance across different executive functions in depression and suggests that, in addition to RSA, LF HRV is also diminished. However, the hypothesis of involvement of low parasympathetic tone in executive dysfunction related to depression received only limited support. |
Sven Hohenstein; Hannes Matuschek; Reinhold Kliegl Linked linear mixed models: A joint analysis of fixation locations and fixation durations in natural reading Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 3, pp. 637–651, 2017. @article{Hohenstein2017, The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as dependent variable or covariate for the other in linear mixed models (LMMs) featuring also psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words) although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: The IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, and the OVP effect (at least the left part of it) is uncovered with the predicted fixation-location covariate, capturing the indirect effects of psycholinguistic covariates. We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research. |
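The linking step described in the entry above is straightforward to express as a two-stage model. Below is a minimal, hypothetical sketch using statsmodels mixed models; the column names (fix_loc, fix_dur, word_freq, word_len, subj) and the input file are illustrative assumptions rather than the authors' data or code, and the random-effects structure is deliberately simplified to random intercepts.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per first fixation on a word, with columns
# fix_loc (landing position), fix_dur (fixation duration), word_freq,
# word_len, and subj (participant identifier).
df = pd.read_csv("fixations.csv")

# Stage 1: mixed model predicting fixation location from psycholinguistic
# covariates, with random intercepts for participants.
m_loc = smf.mixedlm("fix_loc ~ word_freq + word_len", df, groups=df["subj"]).fit()
df["pred_loc"] = m_loc.fittedvalues                 # indirect, covariate-driven part
df["resid_loc"] = df["fix_loc"] - df["pred_loc"]    # residual part (e.g., saccadic error)

# Stage 2: mixed model for fixation duration, entering the predicted and
# residual location components as separate covariates next to the direct effects.
m_dur = smf.mixedlm("fix_dur ~ pred_loc + resid_loc + word_freq + word_len",
                    df, groups=df["subj"]).fit()
print(m_dur.summary())
```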
Linus Holm; Olympia Karampela; Fredrik Ullén; Guy Madison Executive control and working memory are involved in sub-second repetitive motor timing Journal Article In: Experimental Brain Research, vol. 235, no. 3, pp. 787–798, 2017. @article{Holm2017, The nature of the relationship between timing and cognition remains poorly understood. Cognitive control is known to be involved in discrete timing tasks involving durations above 1 s, but has not yet been demonstrated for repetitive motor timing below 1 s. We examined the latter in two continuation tapping experiments, by varying the cognitive load in a concurrent task. In Experiment 1, participants repeated a fixed three finger sequence (low executive load) or a pseudorandom sequence (high load) with either 524-, 733-, 1024- or 1431-ms inter-onset intervals (IOIs). High load increased timing variability for 524 and 733-ms IOIs but not for the longer IOIs. Experiment 2 attempted to replicate this finding for a concurrent memory task. Participants retained three letters (low working memory load) or seven letters (high load) while producing intervals (524- and 733-ms IOIs) with a drum stick. High load increased timing variability for both IOIs. Taken together, the experiments demonstrate that cognitive control processes influence sub-second repetitive motor timing. |
Gerald Hahn; Adrian Ponce-Alvarez; Cyril Monier; Giacomo Benvenuti; Arvind Kumar; Frédéric Chavane; Gustavo Deco; Yves Frégnac Spontaneous cortical activity is transiently poised close to criticality Journal Article In: PLoS Computational Biology, vol. 13, no. 5, pp. e1005543, 2017. @article{Hahn2017, Brain activity displays a large repertoire of dynamics across the sleep-wake cycle and even during anesthesia. It was suggested that criticality could serve as a unifying principle underlying the diversity of dynamics. This view has been supported by the observation of spontaneous bursts of cortical activity with scale-invariant sizes and durations, known as neuronal avalanches, in recordings of mesoscopic cortical signals. However, the existence of neuronal avalanches in spiking activity has been equivocal, with studies reporting both its presence and absence. Here, we show that signs of criticality in spiking activity can change between synchronized and desynchronized cortical states. We analyzed the spontaneous activity in the primary visual cortex of the anesthetized cat and the awake monkey, and found that neuronal avalanches and thermodynamic indicators of criticality strongly depend on collective synchrony among neurons, LFP fluctuations, and behavioral state. We found that synchronized states are associated with criticality, a large dynamical repertoire, and prolonged epochs of eye closure, while desynchronized states are associated with sub-criticality, a reduced dynamical repertoire, and eyes-open conditions. Our results show that criticality in cortical dynamics is not stationary, but fluctuates during anesthesia and between different vigilance states. |
Clotilde Hainline; John-Ross Rizzo; Todd E. Hudson; Weiwei Dai; Joel Birkemeier; Jenelle Raynowska; Rachel C. Nolan; Lisena Hasanaj; Ivan Selesnick; Teresa C. Frohman; Elliot M. Frohman; Steven L. Galetta; Laura J. Balcer; Janet C. Rucker Capturing saccades in multiple sclerosis with a digitized test of rapid number naming Journal Article In: Journal of Neurology, vol. 264, no. 5, pp. 989–998, 2017. @article{Hainline2017, The King-Devick (K-D) test of rapid number naming is a visual performance measure that captures saccadic eye movements. Patients with multiple sclerosis (MS) have slowed K-D test times associated with neurologic disability and reduced quality of life. We assessed eye movements during the K-D test to identify characteristics associated with slowed times. Participants performed a computerized K-D test with video-oculography. The 25-Item National Eye Institute Visual Functioning Questionnaire (NEI-VFQ-25) and its 10-Item Neuro-Ophthalmic Supplement measured vision-specific quality of life (VSQOL). Among 25 participants with MS (age 37 ± 10 years, range 20-59) and 42 controls (age 33 ± 9 years, range 19-54), MS was associated with significantly longer (worse) K-D times (58.2 ± 19.8 vs. 43.8 ± 8.6 s |
Dorothea Hämmerer; Alexandra Hopkins; Matthew J. Betts; Anne Maaß; Raymond J. Dolan; Emrah Düzel In: Neurobiology of Aging, vol. 58, pp. 129–139, 2017. @article{Haemmerer2017, A better memory for negative emotional events is often attributed to a conjoint impact of increased arousal and noradrenergic modulation (NA). A decline in NA during aging is well documented but its impact on memory function during aging is unclear. Using pupil diameter (PD) as a proxy for NA, we examined age differences in memory for negative events in younger (18–30 years) and older (62–83 years) adults based on a segregation of early arousal to negative events, and later retrieval-related PD responses. In keeping with the hypothesis of reduced age-related NA influences, older adults showed attenuated induced PD responses to negative emotional events. The findings highlight a likely contribution of NA to negative emotional memory, mediated via arousal that may be compromised with aging. |
Paul Hands; Jenny C. A. Read True stereoscopic 3D cannot be simulated by shifting 2D content off the screen plane Journal Article In: Displays, vol. 48, pp. 35–40, 2017. @article{Hands2017, Generating stereoscopic 3D (S3D) content is expensive, so industry producers sometimes attempt to save money by including brief sections of 2D content displayed with a uniform disparity, i.e. the 2D image is geometrically shifted behind the screen plane. This manipulation is believed to produce an illusion of depth which, while not as powerful as true S3D, is nevertheless more compelling than simple 2D. Our study examined whether this belief is correct. 30 s clips from a nature documentary were shown in the original S3D, in ordinary 2D and in shifted versions of S3D and 2D. Participants were asked to determine the impression of depth on a 7 point Likert scale. There was a clear and highly significant difference between the S3D depth perception (mean 6.03) and the shifted 2D depth perception (mean 4.13) (P = 0.002, ANOVA). There was no difference between ordinary 2D presented on the screen plane, and the shifted 2D. We conclude that the shifted 2D method not only fails to mimic the depth effect of true S3D, it in fact has no benefit over ordinary 2D in terms of the depth illusion created. This could impact viewing habits of people who notice the difference in depth quality. |
Jessica Hanley; David E. Warren; Natalie Glass; Daniel Tranel; Matthew Karam; Joseph Buckwalter Visual interpretation of plain radiographs in orthopaedics using eye-tracking technology Journal Article In: The Iowa Orthopaedic Journal, vol. 37, pp. 225–231, 2017. @article{Hanley2017, BACKGROUND: Despite the importance of radiographic interpretation in orthopaedics, there is not a clear understanding of the specific visual strategies used while analyzing a plain film. Eyetracking technology allows for the objective study of eye movements while performing a dynamic task, such as reading X-rays. Our study looks to elucidate objective differences in image interpretation between novice and experienced orthopaedic trainees using this novel technology. METHODS: Novice and experienced orthopaedic trainees (N=23) were asked to interpret AP pelvis films, searching for unilateral acetabular fractures while eye-movements were assessed for pattern of gaze, fixation on regions of interest, and time of fixation at regions of interest. Participants were asked to label radiographs as "fractured" or "not fractured." If "fractured", the participant was asked to determine the fracture pattern. A control condition employed Ekman faces and participants judged gender and facial emotion. Data were analyzed for variation in eye movements between participants, accuracy of responses, and response time. RESULTS: Accuracy: There was no significant difference by level of training for accurately identifying fracture images (p=0.3255). There was a significant association between higher level of training and correctly identifying non-fractured images (p=0.0155); greater training was also associated with more success in identifying the correct Judet-Letournel classification (p=0.0029). Response Time: Greater training was associated with faster response times (p=0.0009 for fracture images and 0.0012 for non-fractured images). Fixation Duration: There was no correlation of average fixation duration with experience (p=0.9632). Regions of Interest (ROIs): More experience was associated with an average of two fewer fixated ROIs (p=0.0047). Number of Fixations: Increased experience was associated with fewer fixations overall (p=0.0007). CONCLUSIONS: Experience has a significant impact on both accuracy and efficiency in interpreting plain films. Greater training is associated with a shift toward a more efficient and thorough assessment of plain radiographs. Eyetracking is a useful descriptive tool in the setting of plain film interpretation. CLINICAL RELEVANCE: We propose further assessment of eye movements in larger populations of orthopaedic surgeons, including staff orthopaedists. Describing the differences between novice and expert interpretation may provide insight into ways to accelerate the learning process in young orthopaedists. |
Juan Haro; Marc Guasch; Blanca Vallès; Pilar Ferré Is pupillary response a reliable index of word recognition? Evidence from a delayed lexical decision task Journal Article In: Behavior Research Methods, vol. 49, no. 5, pp. 1930–1938, 2017. @article{Haro2017, Previous word recognition studies have shown that the pupillary response is sensitive to a word's frequency. However, such a pupillary effect may be due to the process of executing a response, instead of being an index of word processing. With the aim of exploring this possibility, we recorded the pupillary responses in two experiments involving a lexical decision task (LDT). In the first experiment, participants completed a standard LDT, whereas in the second they performed a delayed LDT. The delay in the response allowed us to compare pupil dilations with and without the response execution component. The results showed that pupillary response was modulated by word frequency in both the standard and the delayed LDT. This finding supports the reliability of using pupillometry for word recognition research. Importantly, our results also suggest that tasks that do not require a response during pupil recording lead to clearer and stronger effects. |
Anthony M. Harris; Roger W. Remington Contextual cueing improves attentional guidance, even when guidance is supposedly optimal Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 5, pp. 926–940, 2017. @article{Harris2017, Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in which attentional guidance should already be optimal—namely, when attention is captured to the target location by an abrupt onset, or under pop-out conditions. These results have been used to argue for a response-related account of contextual cueing. Here, we combine eye tracking with response time to examine the mechanisms behind contextual cueing in spatially cued and pop-out conditions. Three experiments find consistent response time benefits with repetition, which appear to be driven almost entirely by a reduction in number of fixations, supporting improved attentional guidance as the mechanism behind contextual cueing. No differences were observed in the time between fixating the target and responding—our proxy for response related processes. Furthermore, the correlation between contextual cueing magnitude and the reduction in number of fixations on repeated contexts approaches 1. These results argue strongly that attentional guidance is facilitated by familiar search contexts, even when guidance is near-optimal. |
C. Hübner; Alexander C. Schütz Numerosity estimation benefits from transsaccadic information integration Journal Article In: Journal of Vision, vol. 17, no. 13, pp. 1–16, 2017. @article{Huebner2017, Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistent with previous findings about orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes on our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and is operative despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment. |
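The maximum-likelihood prediction referred to in the entry above is the standard reliability-weighted cue-combination rule, in which each single-cue estimate is weighted by its inverse variance. The snippet below is a generic illustration of that rule with made-up numbers; it is not code or data from the study.

```python
# Generic reliability-weighted (maximum-likelihood) integration of two noisy
# estimates; the weights are the inverse variances of the single-cue estimates.
def mle_integrate(est_periph, sd_periph, est_fovea, sd_fovea):
    w_p = 1.0 / sd_periph ** 2
    w_f = 1.0 / sd_fovea ** 2
    combined = (w_p * est_periph + w_f * est_fovea) / (w_p + w_f)
    combined_sd = (1.0 / (w_p + w_f)) ** 0.5   # never larger than either input SD
    return combined, combined_sd

# Made-up example: the periphery overestimates numerosity, the fovea
# underestimates it; the integrated estimate lies in between and is more reliable.
print(mle_integrate(est_periph=34.0, sd_periph=6.0, est_fovea=28.0, sd_fovea=4.0))
```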
Anneline Huck; Robin L. Thompson; Madeline Cruice; Jane Marshall The influence of sense-contingent argument structure frequencies on ambiguity resolution in aphasia Journal Article In: Neuropsychologia, vol. 100, pp. 171–194, 2017. @article{Huck2017a, Verbs with multiple senses can show varying argument structure frequencies, depending on the underlying sense. When acknowledge is used to mean ‘recognise', it takes a direct object (DO), but when it is used to mean ‘admit' it prefers a sentence complement (SC). The purpose of this study was to investigate whether people with aphasia (PWA) can exploit such meaning-structure probabilities during the reading of temporarily ambiguous sentences, as demonstrated for neurologically healthy individuals (NHI) in a self-paced reading study (Hare et al., 2003). Eleven people with mild or moderate aphasia and eleven neurologically healthy control participants read sentences while their eyes were tracked. Using adapted materials from the study by Hare et al., target sentences containing an SC structure (e.g. He acknowledged (that) his friends would probably help him a lot) were presented following a context prime that biased either a direct object (DO-bias) or sentence complement (SC-bias) reading of the verbs. Half of the stimulus sentences did not contain that, making the post-verbal noun phrase (his friends) structurally ambiguous. Both groups of participants were influenced by structural ambiguity as well as by the context bias, indicating that PWA can, like NHI, use their knowledge of a verb's sense-based argument structure frequency during online sentence reading. However, the individuals with aphasia showed delayed reading patterns and some individual differences in their sensitivity to context and ambiguity cues. These differences compared to the NHI may contribute to difficulties in sentence comprehension in aphasia. |
Anneline Huck; Robin L. Thompson; Madeline Cruice; Jane Marshall Effects of word frequency and contextual predictability on sentence reading in aphasia: An eye movement analysis Journal Article In: Aphasiology, vol. 31, no. 11, pp. 1307–1332, 2017. @article{Huck2017, Background: Mild reading difficulties are a pervasive symptom of aphasia. While much research in aphasia has been devoted to the study of single word reading, little is known about the process of (silent) sentence reading. Reading research in the non-brain-damaged population has benefited from the use of eye-tracking methodology, allowing inferences on cognitive processing without participants making an articulatory response. This body of research identified two factors, which strongly influence reading at the sentence level: word frequency and contextual predictability (influence of context). Aims: The main aim of this study was to investigate whether word frequency and contextual predictability influence sentence reading by people with aphasia (PWA), in parallel to that of neurologically healthy individuals (NHI). A second aim was to examine whether readers with aphasia show individual differences in the effects, and whether these are related to their underlying language profile. Methods & Procedures: Seventeen PWA with associated mild reading difficulties and 20 NHI took part in this study. Individuals with aphasia completed a range of language assessments. For the eye-tracking experiment, participants silently read sentences that included target words varying in word frequency and predictability while their eye movements were recorded. Comprehension accuracy, fixation durations, and the probability of first-pass fixations and first-pass regressions were measured. Outcomes & Results: Eye movements by both groups were significantly influenced by word frequency and predictability, but the predictability effect was stronger for the PWA than the neurologically healthy participants. Additionally, effects of word frequency and predictability were independent for the NHI, but the individuals with aphasia showed a more interactive pattern. Correlational analyses revealed (i) a significant relationship between lexical-semantic impairments and the word frequency effect score and (ii) a marginally significant association between the sentence comprehension skills and the predictability effect score. Conclusions: Consistent with compensatory processing theories, these findings indicate that decreased reading efficiency may trigger a more interactive reading strategy that aims to compensate for poorer reading by putting more emphasis on a sentence context, particularly for low-frequency words. For those individuals who have difficulties applying the strategy automatically, using a sentence context could be a beneficial strategy to focus on in reading intervention. |
Erika K. Hussey; J. Isaiah Harbison; Susan Teubner-Rhodes; Alan Mishler; Kayla Velnoskey; Jared M. Novick Memory and language improvements following cognitive control training Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 1, pp. 23–58, 2017. @article{Hussey2017, Cognitive control refers to adjusting thoughts and actions when confronted with conflict during information processing. We tested whether this ability is causally linked to performance on certain language and memory tasks by using cognitive control training to systematically modulate people's ability to resolve information-conflict across domains. Different groups of subjects trained on 1 of 3 minimally different versions of an n-back task: n-back-with-lures (High-Conflict), n-back-without-lures (Low-Conflict), or 3-back-without-lures (3-Back). Subjects completed a battery of recognition memory and language processing tasks that comprised both high- and low-conflict conditions before and after training. We compared the transfer profiles of (a) the High- versus Low-Conflict groups to test how conflict resolution training contributes to transfer effects, and (b) the 3-Back versus Low-Conflict groups to test for differences not involving cognitive control. High-Conflict training—but not Low-Conflict training—produced discernable benefits on several untrained transfer tasks, but only under selective conditions requiring cognitive control. This suggests that the conflict-focused intervention influenced functioning on ostensibly different outcome measures across memory and language domains. 3-Back training resulted in occasional improvements on the outcome measures, but these were not selective for conditions involving conflict resolution. We conclude that domain-general cognitive control mechanisms are plastic, at least temporarily, and may play a causal role in linguistic and nonlinguistic performance. |
John P. Hutson; Tim J. Smith; Joseph P. Magliano; Lester C. Loschky What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 46, pp. 1–30, 2017. @article{Hutson2017, Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers' belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative. |
Duong Huynh; Srimant P. Tripathy; Harold E. Bedell; Haluk Öğmen The reference frame for encoding and retention of motion depends on stimulus set size Journal Article In: Attention, Perception, & Psychophysics, vol. 79, no. 3, pp. 888–910, 2017. @article{Huynh2017, The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information. |
Sara Iacozza; Albert Costa; Jon Andoni Duñabeitia What do your eyes reveal about your foreign language? Reading emotional sentences in a native and foreign language Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0186027, 2017. @article{Iacozza2017, Foreign languages are often learned in emotionally neutral academic environments which differ greatly from the familiar context where native languages are acquired. This difference in learning contexts has been argued to lead to reduced emotional resonance when confronted with a foreign language. In the current study, we investigated whether the reactivity of the sympathetic nervous system in response to emotionally-charged stimuli is reduced in a foreign language. To this end, pupil sizes were recorded while reading aloud emotional sentences in the native or foreign language. Additionally, subjective ratings of emotional impact were provided after reading each sentence, allowing us to further investigate foreign language effects on explicit emotional understanding. Pupillary responses showed a larger effect of emotion in the native than in the foreign language. However, such a difference was not present for explicit ratings of emotionality. These results reveal that the sympathetic nervous system reacts differently depending on the language context, which in turn suggests deeper emotional processing when reading in a native compared to a foreign language. |
Guilhem Ibos; David J. Freedman Sequential sensory and decision processing in posterior parietal cortex Journal Article In: eLife, vol. 6, pp. 1–19, 2017. @article{Ibos2017, Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). |
Jaime S. Ide; Hsiang C. Tung; Cheng-Ta Yang; Yuan-Chi Tseng; Chiang-Shan R. Li In: Frontiers in Human Neuroscience, vol. 11, pp. 222, 2017. @article{Ide2017, Impulsivity is a personality trait of clinical importance. Extant research focuses on frontostriatal mechanisms of impulsivity and how executive functions are compromised in impulsive individuals. Imaging studies employing voxel based morphometry highlighted impulsivity-related changes in gray matter concentrations in a wide array of cerebral structures. In particular, whereas prefrontal cortical areas appear to show structural alterations in individuals with a neuropsychiatric condition, the findings are less than consistent in the healthy population. Here, in a sample (n = 113) of young adults assessed for Barratt impulsivity, we controlled for age, gender and alcohol use, and showed that higher impulsivity score is associated with increased gray matter volume (GMV) in bilateral medial parietal and occipital cortices known to represent the peripheral visual field. When impulsivity components were assessed, we observed that this increase in parieto-occipital cortical volume is correlated with inattention and non-planning but not motor subscore. In a separate behavioral experiment of 10 young adults, we demonstrated that impulsive individuals are more vulnerable to the influence of a distractor on target detection in an attention task. If replicated, these findings together suggest aberrant visual attention as a neural correlate of an impulsive personality trait in neurotypical individuals and need to be reconciled with the literature that focuses on frontal dysfunctions. |
Jessica L. Irons; Tamara Gradden; Angel Zhang; Xuming He; Nick Barnes; Adele F. Scott; Elinor McKone Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing Journal Article In: Vision Research, vol. 137, pp. 61–79, 2017. @article{Irons2017a, The visual prosthesis (or “bionic eye”) has become a reality but provides a low resolution view of the world. Simulating prosthetic vision in normal-vision observers, previous studies report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40 × 40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods to improve face identity recognition. Inspired by “bionic ear” improvements achieved by altering signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show caricaturing improved identity recognition in memory and/or perception (degree by which two faces look dissimilar) down to a resolution of 32 × 32 with 30% phosphene dropout. Findings imply caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants. |
Jessica L. Irons; Minjeong Jeon; Andrew B. Leber Pre-stimulus pupil dilation and the preparatory control of attention Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0188787, 2017. @article{Irons2017, Task preparation involves multiple component processes, including a general evaluative process that signals the need for adjustments in control, and the engagement of task-specific control settings. Here we examined the dynamics of these different mechanisms in preparing the attentional control system for visual search. We explored preparatory activity using pupil dilation, a well-established measure of task demands and effortful processing. In an initial exploratory experiment, participants were cued at the start of each trial to search for either a salient color singleton target (an easy search task) or a low-salience shape singleton target (a difficult search task). Pupil dilation was measured during the preparation period from cue onset to search display onset. Mean dilation was larger in preparation for the difficult shape target than the easy color target. In two additional experiments, we sought to vary effects of evaluative processing and task-specific preparation separately. Experiment 2 showed that when the color and shape search tasks were matched for difficulty, the shape target no longer evoked larger dilations, and the pattern of results was in fact reversed. In Experiment 3, we manipulated difficulty within a single feature dimension, and found that the difficult search task evoked larger dilations. These results suggest that pupil dilation reflects expectations of difficulty in preparation for a search task, consistent with the activity of an evaluative mechanism. We did not find consistent evidence for a relationship between pupil dilation and search performance (accuracy and response timing), suggesting that pupil dilation during search preparation may not be strongly linked to ongoing task-specific preparation. |
Roxane J. Itier; Karly N. Neath-Tavares Effects of task demands on the early neural processing of fearful and happy facial expressions Journal Article In: Brain Research, vol. 1663, pp. 38–50, 2017. @article{Itier2017, Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. |
Miho Iwasaki; Kodai Tomita; Yasuki Noguchi Non-uniform transformation of subjective time during action preparation Journal Article In: Cognition, vol. 160, pp. 51–61, 2017. @article{Iwasaki2017, Although many studies have reported a distortion of subjective (internal) time during preparation and execution of actions, it is highly controversial whether actions cause a dilation or compression of time. In the present study, we tested a hypothesis that the previous controversy (dilation vs. compression) partly resulted from a mixture of two types of sensory inputs on which a time length was estimated; some studies asked subjects to measure the time of presentation for a single continuous stimulus (stimulus period, e.g. the duration of a long-lasting visual stimulus on a monitor) while others required estimation of a period without continuous stimulations (no-stimulus period, e.g. an inter-stimulus interval between two flashes). Results of our five experiments supported this hypothesis, showing that action preparation induced a dilation of a stimulus period, whereas a no-stimulus period was not subject to this dilation and sometimes can be compressed by action preparation. Those results provided a new insight into a previous view assuming a uniform dilation or compression of subjective time by actions. Our findings about the distinction between stimulus and no-stimulus periods also might contribute to a resolution of mixed results (action-induced dilation vs. compression) in a previous literature. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson Tuned by experience: How orientation probability modulates early perceptual processing Journal Article In: Vision Research, vol. 138, pp. 86–96, 2017. @article{Jabar2017a, Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive ‘P300' component which might be related to either surprise or decision-making. However, the early ‘C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson In: Attention, Perception, & Psychophysics, vol. 79, no. 8, pp. 2338–2353, 2017. @article{Jabar2017, When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute. |
Stephanie Jainta; Mirela Nikolova; Simon P. Liversedge Does text contrast mediate binocular advantages in reading? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 1, pp. 55–68, 2017. @article{Jainta2017, Humans typically make use of both of their eyes in reading and efficient processes of binocular vision provide a stable, single percept of the text. Binocular reading also comes with an advantage: reading speed is high and word frequency effects (i.e., faster lexical processing of words that are more often encountered in a language) emerge during fixations, which is not the case for monocular reading (Jainta, Blythe, & Liversedge, 2014). A potential contributor to this benefit is the reduced contrast in monocular reading: reduced text contrasts in binocular reading are known to slow down reading and word identification (Reingold & Rayner, 2006). To investigate whether contrast reduction mediates the binocular advantage, we first replicated the increased reading times and nullified frequency effects for monocular reading (Experiment 1). Next, we reduced the contrast of binocularly presented whole sentences to 70% (Weber contrast); this reading condition resembled monocular reading, but we found no effect on reading speed or word identification (Experiment 2). A reasonable conclusion, therefore, was that a reduction in contrast is not the (primary) factor that mediates less efficient lexical processing under monocular reading. In a third experiment (Experiment 3) we reduced the sentence contrast to 40%, and the pattern of results showed that, globally, reading was slowed down but clear word frequency effects were present in the data. Thus, word identification processes during reading (i.e., the word frequency effect) were qualitatively different in monocular reading compared with effects observed when text was read with substantially reduced contrast. |
Gernot Horstmann; Stefanie I. Becker; Daniel Ernst Dwelling, rescanning, and skipping of distractors explain search efficiency in difficult search better than guidance by the target Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 291–305, 2017. @article{Horstmann2017, Prominent models of overt and covert visual search focus on explaining search efficiency by visual guidance. That some searches are fast whereas others are slow is explained by the ability of the target to guide attention to the target's position. Comparably little attention is given to other variables that might also influence search efficiency, such as dwelling on distractors, skipping distractors, and revisiting distractors. Here, we examine the relative contributions of dwelling, skipping, rescanning, and the use of visual guidance, in explaining visual search times in general, and the similarity effect in particular. The hallmark of the similarity effect is more efficient search for a target that is dissimilar to the distractors compared to a target that is similar to the distractors. In the present experiment, participants have to find an emotional face target among nine neutral face non-targets. In different blocks, the target is either more or less similar to the non-targets. Eye-tracking is used to separately measure selection latency, dwelling on distractors, and skipping and revisiting of distractors. As expected, visual search times show a large similarity effect. Similarity also has strong effects on dwelling, skipping, and revisiting, but only weak effects on visual guidance. Regression analyses show that dwelling, skipping, and revisiting determine search times on trial level. The influence of dwelling and revisiting is stronger in target absent than in target present trials, whereas the opposite is true for skipping. The similarity effect is best explained by dwelling. Additionally, including a measure of guidance does not yield substantial benefits. In sum, results indicate that guidance by the target is not the sole principle behind fast search; rather, distractors are less often skipped, more often visited, and longer dwelled on in slow search conditions. |
Jaakko Hotta; Jukka Saari; Miika Koskinen; Yevhen Hlushchuk; Nina Forss; Riitta Hari Abnormal brain responses to action observation in complex regional pain syndrome Journal Article In: Journal of Pain, vol. 18, no. 3, pp. 255–265, 2017. @article{Hotta2017, Patients with complex regional pain syndrome (CRPS) display various abnormalities in central motor function, and their pain is intensified when they perform or just observe motor actions. In this study, we examined the abnormalities of brain responses to action observation in CRPS. We analyzed 3-T functional magnetic resonance images from 13 upper limb CRPS patients (all female, ages 31–58 years) and 13 healthy, age- and sex-matched control subjects. The functional magnetic resonance imaging data were acquired while the subjects viewed brief videos of hand actions shown in the first-person perspective. A pattern-classification analysis was applied to characterize brain areas where the activation pattern differed between CRPS patients and healthy subjects. Brain areas with statistically significant group differences (q < .05, false discovery rate-corrected) included the hand representation area in the sensorimotor cortex, inferior frontal gyrus, secondary somatosensory cortex, inferior parietal lobule, orbitofrontal cortex, and thalamus. Our findings indicate that CRPS impairs action observation by affecting brain areas related to pain processing and motor control. Perspective: This article shows that in CRPS, the observation of others' motor actions induces abnormal neural activity in brain areas essential for sensorimotor functions and pain. These results delineate the cerebral basis for action-observation impairments in CRPS. |
Michael C. Hout; Arryn Robbins; Hayward J. Godwin; Gemma Fitzsimmons; Collin Scarince Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles Journal Article In: Attention, Perception, & Psychophysics, vol. 79, pp. 1578–1592, 2017. @article{Hout2017, Unlike in laboratory visual search tasks—wherein participants are typically presented with a pictorial representation of the item they are asked to seek out—in real-world searches, the observer rarely has veridical knowledge of the visual features that define their target. During categorical search, observers look for any instance of a categorically defined target (e.g., helping a family member look for their mobile phone). In these circumstances, people may not have information about noncritical features (e.g., the phone's color), and must instead create a broad mental representation using the features that define (or are typical of) the category of objects they are seeking out (e.g., modern phones are typically rectangular and thin). In the current investigation (Experiment 1), using a categorical visual search task, we add to the body of evidence suggesting that categorical templates are effective enough to conduct efficient visual searches. When color information was available (Experiment 1a), attentional guidance, attention restriction, and object identification were enhanced when participants looked for categories with consistent features (e.g., ambulances) relative to categories with more variable features (e.g., sedans). When color information was removed (Experiment 1b), attention benefits disappeared, but object recognition was still better for feature-consistent target categories. In Experiment 2, we empirically validated the relative homogeneity of our societally important vehicle stimuli. Taken together, our results are in line with a category-consistent view of categorical target templates (Yu, Maxfield, & Zelinsky, Psychological Science, 2016, doi:10.1177/0956797616640237), and suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Processing of co-reference in autism spectrum disorder Journal Article In: Autism Research, vol. 10, no. 12, pp. 1968–1980, 2017. @article{Howard2017, Accuracy for reading comprehension and inferencing tasks has previously been reported as reduced for individuals with autism spectrum disorder (ASD), relative to typically developing (TD) controls. In this study, we used an eye movements and reading paradigm to examine whether this difference in performance accuracy is underpinned by differences in the inferential work required to compute a co-referential link. Participants read two sentences that contained a category noun (e.g., bird) that was preceded by and co-referred to an exemplar that was either typical (e.g., pigeon) or atypical (e.g., penguin). Both TD and ASD participants showed an effect of typicality for gaze durations upon the category noun, with longer times being observed when the exemplar was atypical, in comparison to typical. No group differences or interactions were detected for target processing, and verbal language proficiency was found to predict general reading and inferential skill. The only difference between groups was that individuals with ASD engaged in more re-reading than TD participants. These data suggest that readers with ASD do not differ in the efficiency with which they compute anaphoric links on-line during reading. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Investigating the use of world knowledge during on-line comprehension in adults with Autism Spectrum Disorder Journal Article In: Journal of Autism and Developmental Disorders, vol. 47, no. 7, pp. 2039–2053, 2017. @article{Howard2017a, The on-line use of world knowledge during reading was examined in adults with autism spectrum disorder (ASD). Both ASD and typically developed adults read sentences that included plausible, implausible and anomalous thematic relations, as their eye movements were monitored. No group differences in the speed of detection of the anomalous violations were found, but the ASD group showed a delay in detection of implausible thematic relations. These findings suggest that there are subtle differences in the speed of world knowledge processing during reading in ASD. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Benchmark eye movement effects during natural reading in autism spectrum disorder Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 1, pp. 109–127, 2017. @article{Howard2017b, In 2 experiments, eye tracking methodology was used to assess on-line lexical, syntactic and semantic processing in autism spectrum disorder (ASD). In Experiment 1, lexical identification was examined by manipulating the frequency of target words. Both typically developed (TD) and ASD readers showed normal frequency effects, suggesting that the processes TD and ASD readers engage in to identify words are comparable. In Experiment 2, syntactic parsing and semantic interpretation requiring the on-line use of world knowledge were examined, by having participants read garden path sentences containing an ambiguous prepositional phrase. Both groups showed normal garden path effects when reading low-attached sentences and the time course of reading disruption was comparable between groups. This suggests that not only do ASD readers hold similar syntactic preferences to TD readers, but also that they use world knowledge on-line during reading. Together, these experiments demonstrate that the initial construction of sentence interpretation appears to be intact in ASD. However, the finding that ASD readers skip target words less often in Experiment 2, and take longer to read sentences during second pass for both experiments, suggests that they adopt a more cautious reading strategy and take longer to evaluate their sentence interpretation prior to making a manual response. |
Jing Huang; Karl R. Gegenfurtner; Alexander C. Schütz; Jutta Billino Age effects on saccadic adaptation: Evidence from different paradigms reveals specific vulnerabilities Journal Article In: Journal of Vision, vol. 17, no. 6, pp. 1–18, 2017. @article{Huang2017, Saccadic eye movements provide an opportunity to study closely interwoven perceptual, motor, and cognitive changes during aging. Here, we investigated age effects on different mechanisms of saccadic plasticity. We compared age effects in two different adaptation paradigms that tap into low- and high-level adaptation processes. A total of 27 senior adults and 25 young adults participated in our experiments. In our first experiment, we elicited adaptation by a double-step paradigm, which is designed to trigger primarily low-level, gradual motor adaptation. Age groups showed equivalent adaptation of saccadic gain. In our second experiment, adaptation was induced by a perceptual task that emphasizes high-level, fast processes. We consistently found no evidence for age-related differences in low-level adaptation; however, the fast adaptation response was significantly more pronounced in the young adult group. We conclude that low-level motor adaptation is robust during healthy aging but that high-level contributions, presumably involving executive strategies, are subject to age-related decline. Our findings emphasize the need to differentiate between specific aging processes in order to understand functional decline and stability across the adult life span. |
Nicholas Huang; Mounya Elhilali Auditory salience using natural soundscapes Journal Article In: The Journal of the Acoustical Society of America, vol. 141, no. 3, pp. 2163–2176, 2017. @article{Huang2017a, Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience. |
Po Sheng Huang An exploratory study on remote associates problem solving: Evidence of eye movement indicators Journal Article In: Thinking Skills and Creativity, vol. 24, pp. 63–72, 2017. @article{Huang2017b, In recent years, remote associates problems have been widely used to measure creative processes. However, studies have rarely explored the processes involved in remote associates problem solving. The main purpose of this study was to record eye movements while participants solved twelve remote associates problems compiled by Huang (2014). The results show the following: (1) The mean fixation duration gradually increases throughout the problem-solving process, which indicates that more problem solvers encounter impasses over the course of problem solving. This result supports the “impasse encounter” phase of insight. (2) During the initial period of problem solving, individuals display more regression counts in the fixation region than in the key region, which supports the idea that the impasses are caused by inappropriate initial representation. (3) During the middle period of the problem-solving process, the time individuals spend gazing at the key region increases, while the time that they spend gazing at the fixation region decreases. This pattern supports the “impasse resolution and insight” phase of insight. Finally, we compare the differences in eye movement between insight and remote associates problem solving. |
Jason Hubbard; David Kuhns; Theo A. J. Schäfer; Ulrich Mayr Is conflict adaptation due to active regulation or passive carry-over? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 3, pp. 385–393, 2017. @article{Hubbard2017, Conflict-adaptation effects (i.e., reduced response-time costs on high-conflict trials following high-conflict trials) supposedly represent our cognitive system's ability to regulate itself according to current processing demands. However, currently it is not clear whether these effects reflect conflict-triggered, active regulation, or passive carry-over of previous-trial control settings. We used eye movements to examine whether the degree of experienced conflict modulates conflict-adaptation effects, as the conflict-triggered regulation view predicts. Across 2 experiments in which participants had to identify a target stimulus based on an endogenous cue while—on conflict trials—having to resist a sudden-onset distractor, we found a clear indication of conflict adaptation. This adaptation effect disappeared, however, when participants inadvertently fixated the sudden-onset distractor on the previous trial—that is, when they experienced a high degree of conflict. This pattern of results suggests that conflict adaptation can be explained parsimoniously in terms of a broader memory process that retains recently adopted control settings across trials. |
Siobhán Harty; Peter R. Murphy; Ian H. Robertson; Redmond G. O'Connell Parsing the neural signatures of reduced error detection in older age Journal Article In: NeuroImage, vol. 161, pp. 43–55, 2017. @article{Harty2017, Recent work has demonstrated that explicit error detection relies on a neural evidence accumulation process that can be traced in the human electroencephalogram (EEG). Here, we sought to establish the impact of natural aging on this process by recording EEG from young (18–35 years) and older adults (65–88 years) during the performance of a Go/No-Go paradigm in which participants were required to overtly signal their errors. Despite performing the task with equivalent accuracy, older adults reported substantially fewer errors, and their reports were both slower and more variable in timing. These behavioral differences were linked to three key neurophysiological changes reflecting distinct parameters of the error detection decision process: a reduction in medial frontal delta/theta (2–7 Hz) activity, indicating diminished top-down input to the decision process; a slower rate of evidence accumulation as indexed by the rate of rise of a centro-parietal signal, known as the error positivity; and a higher motor execution threshold as indexed by lateralized beta-band (16–30 Hz) activity. Our data provide novel insight into how the natural aging process affects the neural underpinnings of error detection. |
Hannah Harvey; Hayward J. Godwin; Gemma Fitzsimmons; Simon P. Liversedge; Robin Walker Oculomotor and linguistic processing effects in reading dynamic horizontally scrolling text Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 3, pp. 518–536, 2017. @article{Harvey2017, Two experiments are reported investigating oculomotor behavior and linguistic processing when reading dynamic horizontally scrolling text (compared to reading normal static text). Three factors known to modulate processing time in normal reading were investigated: Word length and word frequency were examined in Experiment 1, and target word predictability in Experiment 2. An analysis of global oculomotor behavior across the 2 experiments showed that participants made fewer and longer fixations when reading scrolling text, with shorter progressive and regressive saccades between these fixations. Comparisons of the linguistic manipulations showed evidence of a dissociation between word-level and sentence-level processing. Word-level processing (Experiment 1) was preserved for the dynamic scrolling text condition with no difference in length and frequency effects between scrolling and static text formats. However, sentence-level integration (Experiment 2) was reduced for scrolling compared to static text in that we obtained no early facilitation effect for predictable words under scrolling text conditions. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017. @article{Hasanzadeh2017a, Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd Impact of construction workers' hazard identification skills on their visual attention Journal Article In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017. @article{Hasanzadeh2017, Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors–such as failure to identify a hazard–are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics–fixation count, dwell-time percentage, and run count–were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills and at-risk workers. Because discriminant function analysis is similar to regression, discriminant function (linear combinations of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye-movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time. |
S. A. Hassani; Mariann Oemisch; M. Balcarras; Stephanie Westendorff; S. Ardid; M. A. Meer; P. Tiesinga; T. Womelsdorf In: Scientific Reports, vol. 7, pp. 40606, 2017. @article{Hassani2017, Noradrenaline is believed to support cognitive flexibility through the alpha 2A noradrenergic receptor (a2A-NAR) acting in prefrontal cortex. Enhanced flexibility has been inferred from improved working memory with the a2A-NA agonist Guanfacine. But it has been unclear whether Guanfacine improves specific attention and learning mechanisms beyond working memory, and whether the drug effects can be formalized computationally to allow single subject predictions. We tested and confirmed these suggestions in a case study with a healthy nonhuman primate performing a feature-based reversal learning task evaluating performance using Bayesian and Reinforcement learning models. In an initial dose-testing phase we found a Guanfacine dose that increased performance accuracy, decreased distractibility and improved learning. In a second experimental phase using only that dose we examined the faster feature-based reversal learning with Guanfacine with single-subject computational modeling. Parameter estimation suggested that improved learning is not accounted for by varying a single reinforcement learning mechanism, but by changing the set of parameter values to higher learning rates and stronger suppression of non-chosen over chosen feature information. These findings provide an important starting point for developing nonhuman primate models to discern the synaptic mechanisms of attention and learning functions within the context of a computational neuropsychiatry framework. |
Taylor R. Hayes; John M. Henderson Scan patterns during real-world scene viewing predict individual differences in cognitive capacity Journal Article In: Journal of Vision, vol. 17, no. 5, pp. 1–17, 2017. @article{Hayes2017, From the earliest recordings of eye movements during active scene viewing to the present day, researchers have commonly reported individual differences in eye movement scan patterns under constant stimulus and task demands. These findings suggest viewer individual differences may be important for understanding gaze control during scene viewing. However, the relationship between scan patterns and viewer individual differences during scene viewing remains poorly understood because scan patterns are difficult to analyze. The present study uses a powerful technique called Successor Representation Scanpath Analysis (Hayes, Petrov, & Sederberg, 2011, 2015) to quantify the strength of the association between individual differences in scan patterns during real-world scene viewing and individual differences in viewer intelligence, working memory capacity, and speed of processing. The results of this analysis revealed individual differences in scan patterns that explained more than 40% of the variance in viewer intelligence and working memory capacity measures, and more than a third of the variance in speed of processing measures. The theoretical implications of our findings for models of gaze control and avenues for future individual differences research are discussed. |
Dana A. Hayward; Willa Voorhies; Jenna L. Morris; Francesca Capozzi; Jelena Ristic Staring reality in the face: A comparison of social attention across laboratory and real world measures suggests little common ground Journal Article In: Canadian Journal of Experimental Psychology, vol. 71, no. 3, pp. 212–225, 2017. @article{Hayward2017, The ability to attend to someone else's gaze is thought to represent 1 of the essential building blocks of the human sociocognitive system. This behavior, termed social attention, has traditionally been assessed using laboratory procedures in which participants' response time and/or accuracy performance indexes attentional function. Recently, a parallel body of emerging research has started to examine social attention during real life social interactions using naturalistic and observational methodologies. The main goal of the present work was to begin connecting these two lines of inquiry. To do so, here we operationalized, indexed, and measured the engagement and shifting components of social attention using covert and overt measures. These measures were obtained during an unconstrained real-world social interaction and during a typical laboratory social cuing task. Our results indicated reliable and overall similar indices of social attention engagement and shifting within each task. However, these measures did not relate across the 2 tasks. We discuss these results as potentially reflecting the differences in social attention mechanisms, the specificity of the cuing task's measurement, as well as possible general dissimilarities with respect to context, task goals, and/or social presence. |
Matthew Heath; Erin M. Shellington; Sam Titheridge; Dawn P. Gill; Robert J. Petrella In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017. @article{Heath2017, Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk of, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., eye movement mirror-symmetrical to a target). Antisaccades are an ideal tool for the study of individuals with subtle executive deficits because of their hands- and language-free nature and because the task's neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention and the magnitude of the decrease was consistent across groups. Thus, multi-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC. |
Jessica Heeman; Stefan Van der Stigchel; Jan Theeuwes The influence of distractors on express saccades Journal Article In: Journal of Vision, vol. 17, no. 1, pp. 1–17, 2017. @article{Heeman2017, It is well known that regular target-driven saccades are affected by the presence of close and remote distractors. Distractors close to the target affect the saccade landing position (known as the global effect), while remote distractors prolong saccade latencies to the target (known as the remote-distractor effect). Little is known about whether a different population of saccades known as express saccades (saccades with very short latencies between 80 and 130 ms) is similarly affected by close and remote distractors, as these saccades are considered to be the result of advanced preparation of an oculomotor program toward the target. We designed a task in which we were able to generate a large number of express saccades, as evidenced by a separate and very early peak in the saccade-latency distribution—a distribution that was different from that of regular saccades. Our results show that irrelevant and unexpected visual input had a large effect on express saccades. We found a global and a remote-distractor effect which were similar to those seen in regular saccades. Even though our findings confirm the existence of very-short-latency saccades in humans, it is questionable whether they represent a different population of saccades, as they were affected by the presence of distractors to the same extent as regular saccades. |
Christoph Helmchen; Jan Birger Kirchhoff; Martin Göttlich; Andreas Sprenger Postural ataxia in cerebellar downbeat nystagmus: Its relation to visual, proprioceptive and vestibular signals and cerebellar atrophy Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0168808, 2017. @article{Helmchen2017, Background: The cerebellum integrates proprioceptive, vestibular and visual signals for postural control. Cerebellar patients with downbeat nystagmus (DBN) complain of unsteadiness of stance and gait as well as blurred vision and oscillopsia. Objectives: The aim of this study was to elucidate the differential role of visual input, gaze eccentricity, vestibular and proprioceptive input on the postural stability in a large cohort of cerebellar patients with DBN, in comparison to healthy age-matched control subjects. Methods: Oculomotor (nystagmus, smooth pursuit eye movements) and postural (postural sway speed) parameters were recorded and related to each other and volumetric changes of the cerebellum (voxel-based morphometry, SPM). Results: Twenty-seven patients showed larger postural instability in all experimental conditions. Postural sway increased with nystagmus in the eyes closed condition but not with the eyes open. Romberg's ratio remained stable and was not different from healthy controls. Postural sway did not change with gaze position or graviceptive input. It increased with attenuated proprioceptive input and on tandem stance in both groups but Romberg's ratio also did not differ. Cerebellar atrophy (vermal lobule VI, VIII) correlated with the severity of impaired smooth pursuit eye movements of DBN patients. Conclusions: Postural ataxia of cerebellar patients with DBN cannot be explained by impaired visual feedback. Despite oscillopsia, visual feedback contributions to cerebellar postural control seem to be preserved, as postural sway was greatest under visual deprivation. The increase in postural ataxia is neither related to modulations of single components characterizing nystagmus nor to deprivation of single sensory (visual, proprioceptive) inputs usually stabilizing stance. Re-weighting of multisensory signals and/or inappropriate cerebellar motor commands might account for this postural ataxia. |
Jens R. Helmert; Claudia Symmank; Sebastian Pannasch; Harald Rohm Have an eye on the buckled cucumber: An eye tracking study on visually suboptimal foods Journal Article In: Food Quality and Preference, vol. 60, pp. 40–47, 2017. @article{Helmert2017, Waste is an ever growing problem in the food supply chain, starting in the production up to the consumers' households. A precondition for a consumer to purchase a product is to recognize it as an option in the first place. Therefore, in the present study, we investigated eye movement behavior on impeccable and visually suboptimal food items in a purchase or discard decision task. Additionally, in some trials price badges of the suboptimal food items were designed specifically in order to attract attention. Design changes included messages regarding price and taste, respectively, either presented in red or green. The results show that the design changes indeed attracted attention towards suboptimal food items in terms of time to first fixation, and also prolonged total fixation duration. However, only color yielded differences between the design variations, with red resulting in longer total fixation durations. Additionally, we inspected choice behavior towards visually suboptimal food items. As can be expected, purchase decisions declined for the suboptimal as compared to the impeccable items. However, when presented with differently designed price badges, a positive trend to purchase the suboptimal items was obtained. Our results show that price badge designs impact attention, cognitive processing, and finally also purchase decisions. Therefore, supplying visually suboptimal food in stores should be embedded into efforts to attract attention towards these products, as selling visually suboptimal food might positively impact waste balance in the food domain. |
Andrea Helo; Sandrien Ommen; Sebastian Pannasch; Lucile Danteny-Dordoigne; Pia Rämä Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers Journal Article In: Infant Behavior and Development, vol. 49, pp. 248–266, 2017. @article{Helo2017, Conceptual representations of everyday scenes are built in interaction with the visual environment and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer to inconsistent than consistent objects whether the objects had a high or a low saliency. In contrast, toddlers presented a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer to inconsistent objects while toddlers with higher vocabulary skills looked equally long to both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence in eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. |
John M. Henderson; Taylor R. Hayes Meaning-based guidance of attention in scenes as revealed by meaning maps Journal Article In: Nature Human Behaviour, vol. 1, no. 10, pp. 743–747, 2017. @article{Henderson2017, Real-world scenes comprise a blooming, buzzing confusion of information. To manage this complexity, visual attention is guided to important scene regions in real time [1–7]. What factors guide attention within scenes? A leading theoretical position suggests that visual salience based on semantically uninterpreted image features plays the critical causal role in attentional guidance, with knowledge and meaning playing a secondary or modulatory role [8–11]. Here we propose instead that meaning plays the dominant role in guiding human attention through scenes. To test this proposal, we developed 'meaning maps' that represent the semantic richness of scene regions in a format that can be directly compared to image salience. We then contrasted the degree to which the spatial distributions of meaning and salience predict viewers' overt attention within scenes. The results showed that both meaning and salience predicted the distribution of attention, but that when the relationship between meaning and salience was controlled, only meaning accounted for unique variance in attention. This pattern of results was apparent from the very earliest time-point in scene viewing. We conclude that meaning is the driving force guiding attention through real-world scenes. |
Piril Hepsomali; Julie A. Hadwin; Simon P. Liversedge; Matthew Garner Pupillometric and saccadic measures of affective and executive processing in anxiety Journal Article In: Biological Psychology, vol. 127, pp. 173–179, 2017. @article{Hepsomali2017, Anxious individuals report hyper-arousal and sensitivity to environmental stimuli, difficulties concentrating, performing tasks efficiently and inhibiting unwanted thoughts and distraction. We used pupillometry and eye-movement measures to compare high- vs. low-anxious individuals' hyper-reactivity to emotional stimuli (facial expressions) and subsequent attentional biases in a memory-guided pro- and antisaccade task during conditions of low and high cognitive load (short vs. long delay). High-anxious individuals produced larger and slower pupillary responses to face stimuli, and more erroneous eye-movements, particularly following the long delay. Low-anxious individuals' pupillary responses were sensitive to task demand (reduced during short delay), whereas high-anxious individuals' were not. These findings provide evidence in anxiety of enhanced, sustained and inflexible patterns of pupil responding during affective stimulus processing and cognitive load that precede deficits in task performance. |
James P. Herman; Richard J. Krauzlis Color-change detection activity in the primate superior colliculus Journal Article In: eNeuro, vol. 4, no. 2, pp. 1–16, 2017. @article{Herman2017, The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements. |
Erno J. Hermans; Jonathan W. Kanen; Arielle Tambini; Guillén Fernández; Lila Davachi; Elizabeth A. Phelps In: Cerebral Cortex, vol. 27, no. 5, pp. 3028–3041, 2017. @article{Hermans2017, After encoding, memories undergo a process of consolidation that determines long-term retention. For conditioned fear, animal models postulate that consolidation involves reactivations of neuronal assemblies supporting fear learning during postlearning "offline" periods. However, no human studies to date have investigated such processes, particularly in relation to long-term expression of fear. We tested 24 participants using functional MRI on 2 consecutive days in a fear conditioning paradigm involving 1 habituation block, 2 acquisition blocks, and 2 extinction blocks on day 1, and 2 re-extinction blocks on day 2. Conditioning blocks were preceded and followed by 4.5-min rest blocks. Strength of spontaneous recovery of fear on day 2 served as a measure of long-term expression of fear. Amygdala connectivity primarily with hippocampus increased progressively during postacquisition and postextinction rest on day 1. Intraregional multi-voxel correlation structures within amygdala and hippocampus sampled during a block of differential fear conditioning furthermore persisted after fear learning. Critically, both these main findings were stronger in participants who exhibited spontaneous recovery 24 h later. Our findings indicate that neural circuits activated during fear conditioning exhibit persistent postlearning activity that may be functionally relevant in promoting consolidation of the fear memory. |
Ehab W. Hermena; Simon P. Liversedge; Denis Drieghe In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 3, pp. 451–471, 2017. @article{Hermena2017, The authors conducted 2 eye movement experiments in which they used the typographical and linguistic properties of Arabic to disentangle the influences of words' number of letters and spatial extent on measures of fixation duration and saccade targeting (Experiment 1), and to investigate the influence of initial bigram characteristics on saccade targeting during reading (Experiment 2). In the first experiment, through the use of a proportional font, which is more natural-looking in Arabic compared to monospaced fonts, the authors manipulated the number of letters (5 vs. 7) and the spatial extent (wide vs. narrow) of words embedded in frame sentences. The results obtained replicate and expand upon previous findings in other alphabetic languages that the number of letters influences fixation durations, whereas saccade targeting (as indicated by measures of fixation count and probability of skipping and refixation) is more influenced by the word's spatial extent. In the second experiment, the authors compared saccade targeting measures (saccade amplitude and initial fixation location) in 6- and 7-letter words beginning with initial bigrams that were of extremely high frequency ([character omitted] the), relatively high frequency ([character omitted] to/for the), or beginning with the letters of the word stem. The results showed negligible modulation of saccade targeting by initial bigram characteristics. The results also highlighted the importance of selecting the appropriate measures of initial fixation location (spatial vs. character-based measures) during reading text rendered using proportional fonts. |
Frouke Hermens The effects of social and symbolic cues on visual search: Cue shape trumps biological relevance Journal Article In: Psihologija, vol. 50, no. 2, pp. 117–140, 2017. @article{Hermens2017a, Arrow signs are often used in crowded environments such as airports to direct observers' attention to objects and areas of interest. Research with social and symbolic cues presented in isolation at fixation has suggested that social cues (such as eye gaze and pointing hands) are more effective in directing observers' attention than symbolic cues. The present work examines whether in visual search, social cues would therefore be more effective than arrows, by asking participants to locate target objects in crowded displays that were cued by eye-gaze, pointing hands or arrow cues. Results show an advantage for arrow cues, but only for arrow cues that stand out from the surroundings. The results confirm earlier suggestions that in extrafoveal vision cue shape trumps biological relevance. Eye movements suggest that these cueing effects rely predominantly on extrafoveal perception of the cues. |
Frouke Hermens The influence of social stigmas on observers' eye movements Journal Article In: Journal of Articles in Support of the Null Hypothesis, vol. 14, no. 1, pp. 1–18, 2017. @article{Hermens2017, Some social stigmas are associated with clear visual cues (facial scars, tattoos). Eye tracking has shown that such social stigmas influence the eye movements of other people. Other social stigmas often go without clearly visible cues (e.g., a mental illness or a criminal record). The present study investigates whether providing information about such stigmas affects eye movements of observers. Participants were presented with video clips and advance information about one of the actors that was either stigmatizing (related to mental health or a criminal past) or non-stigmatizing. The results show that eye movements towards the target actor were not systematically affected by stigmatizing advance information and were not associated with explicit attitudes from questionnaires. Results therefore suggest that stigmas without clear visual cues do not draw attention to or away from the person involved. |
Frouke Hermens; Markus Bindemann; A. Mike Burton Responding to social and symbolic extrafoveal cues: Cue shape trumps biological relevance Journal Article In: Psychological Research, vol. 81, no. 1, pp. 24–42, 2017. @article{Hermens2017b, Social cues presented at visual fixation have been shown to strongly influence an observer's attention and response selection. Here we ask whether the same holds for cues (initially) presented away from fixation, as cues are commonly perceived in natural vision. In six experiments, we show that extrafoveally presented cues with a distinct outline, such as pointing hands, rotated heads, and arrow cues, result in strong cueing of responses (either to the cue itself, or a cued object). In contrast, cues without a clear outline, such as gazing eyes and direction words, exert much weaker effects on participants' responses to a target cue. We also show that distraction effects on response times are relatively weak, but that strong interference effects can be obtained by measuring mouse trajectories. Eye tracking suggests that gaze cues are slower to respond to because their direction cannot easily be perceived in extrafoveal vision. Together, these data suggest that the strength of an extrafoveal cue is determined by the shape of the cue outline, rather than its biological relevance (i.e., whether the cue is provided by another human being), and that this shape effect is due to how easily the direction of a cue can be perceived in extrafoveal vision. |
Carl J. J. Herrmann; Ralf Metzler; Ralf Engbert A self-avoiding walk with neural delays as a model of fixational eye movements Journal Article In: Scientific Reports, vol. 7, pp. 12958, 2017. @article{Herrmann2017, Fixational eye movements show scaling behaviour of the positional mean-squared displacement with a characteristic transition from persistence to antipersistence for increasing time-lag. These statistical patterns were found to be mainly shaped by microsaccades (fast, small-amplitude movements). However, our re-analysis of fixational eye-movement data provides evidence that the slow component (physiological drift) of the eyes exhibits scaling behaviour of the mean-squared displacement that varies across human participants. These results suggest that drift is a correlated movement that interacts with microsaccades. Moreover, on the long time scale, the mean-squared displacement of the drift shows oscillations, which is also present in the displacement auto-correlation function. This finding lends support to the presence of time-delayed feedback in the control of drift movements. Based on an earlier non-linear delayed feedback model of fixational eye movements, we propose and discuss different versions of a new model that combines a self-avoiding walk with time delay. As a result, we identify a model that reproduces oscillatory correlation functions, the transition from persistence to antipersistence, and microsaccades. |
Natela Shanidze; Stephen J. Heinen; Preeti Verghese Monocular and binocular smooth pursuit in central field loss Journal Article In: Vision Research, vol. 141, pp. 181–190, 2017. @article{Shanidze2017, Macular degeneration results in heterogeneous central field loss (CFL) and often has asymmetrical effects in the two eyes. As such, it is not clear to what degree the movements of the two eyes are coordinated. To address this issue, we examined smooth pursuit quantitatively in CFL participants during binocular viewing and compared it to the monocular viewing case. We also examined coordination of the two eyes during smooth pursuit and how this coordination was affected by interocular ratios of acuity and contrast, as well as CFL-specific interocular differences, such as scotoma sizes and degree of binocular overlap. We hypothesized that the coordination of eye movements would depend on the binocularity of the two eyes. To test our hypotheses, we used a modified step-ramp paradigm, and measured pursuit in both eyes while viewing was binocular, or monocular with the dominant or non-dominant eye. Data for CFL participants and age-matched controls were examined at the group, within-group, and individual levels. We found that CFL participants had a broader range of smooth pursuit gains and a significantly lower correlation between the two eyes, as compared to controls. Across both CFL and control groups, smooth pursuit gain and correlation between the eyes are best predicted by the ratio of contrast sensitivity between the eyes. For the subgroup of participants with measurable stereopsis, both smooth pursuit gain and correlation are best predicted by stereoacuity. Therefore, our results suggest that coordination between the eyes during smooth pursuit depends on binocular cooperation between the eyes. |
Alon Shapira; Anna Sterkin; Moshe Fried; Oren Yehezkel; Zeev Zalevsky; Uri Polat Increased gamma band activity for lateral interactions in humans Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0187520, 2017. @article{Shapira2017, Collinear facilitation of contrast sensitivity supported by lateral interactions within primary visual cortex is implicated in contour and object perception, with neural correlates in several frequency bands. Although a higher-frequency component of the ERP power spectrum, the gamma-band, is postulated to reflect object representation, attention and memory, its neuronal source has been questioned, suggesting it is an artifact reflecting saccadic eye movements. Here we explored the gamma-band activity during collinear facilitation with no saccade-related confounds. We used single-trial spectral analysis of ERP in occipital channels in a time-window of nearly complete saccadic suppression and discarded sporadic trials containing saccades, in order to avoid saccadic artifacts. Although converging evidence suggests that gamma-band oscillations emerge from local excitatory–inhibitory balance involving GABAergic inhibition, here we show activity amplification during facilitatory collinear interactions, presumably dominated by excitations, in the gamma-band 150–350 milliseconds following the onset of a low, near-threshold contrast stimulus. This result highlights the potential role of gamma-band oscillations in neuronal encoding of basic processes in visual perception. Thus, our findings suggest that gamma-band ERP spectrum analysis may serve as a useful and reliable tool for exploring basic perception, both in normal adults and in special populations. |
Erin M. Shellington; Matthew Heath; Dawn P. Gill; Robert J. Petrella In: Journal of Alzheimer's Disease, vol. 58, no. 1, pp. 17–22, 2017. @article{Shellington2017, Adults (≥55 years) with self-reported cognitive complaints (sCC) were randomized to: multiple-modality exercise (M2), or multiple-modality plus mind-motor exercise (M4), for 24-weeks. Participants (n = 58) were assessed on antisaccade reaction time (RT) to examine executive-related oculomotor control and self-reported physical activity (PA) at pre-intervention (V0), post-intervention (V1), and 52-weeks follow-up (V2). We previously reported significant improvements in antisaccade RT of 23 ms at V1, in both groups. We now report maintenance of antisaccade RT improvement from V1 to V2, t(57) = 0.8 |
Annie L. Shelton; Kim M. Cornish; Meaghan Clough; Sanuji Gajamange; Scott Kolbe; Joanne Fielding Disassociation between brain activation and executive function in fragile X premutation females Journal Article In: Human Brain Mapping, vol. 38, no. 2, pp. 1056–1067, 2017. @article{Shelton2017, Executive dysfunction has been demonstrated among premutation (PM) carriers (55-199 CGG repeats) of the Fragile X mental retardation 1 (FMR1) gene. Further, alterations to neural activation patterns have been reported during memory and comparison based functional magnetic resonance imaging (fMRI) tasks in these carriers. For the first time, the relationships between fMRI neural activation during an interleaved ocular motor prosaccade/antisaccade paradigm, and concurrent task performance (saccade measures of latency, accuracy and error rate) in PM females were examined. Although no differences were found in whole brain activation patterns, regions of interest (ROI) analyses revealed reduced activation in the right ventrolateral prefrontal cortex (VLPFC) during antisaccade trials for PM females. Further, a series of divergent and group specific relationships were found between ROI activation and saccade measures. Specifically, for control females, activation within the right VLPFC and supramarginal gyrus correlated negatively with antisaccade latencies, while for PM females, activation within these regions was found to negatively correlate with antisaccade accuracy and error rate (right VLPFC only). For control females, activation within frontal and supplementary eye fields and bilateral intraparietal sulci correlated with prosaccade latency and accuracy; however, no significant prosaccade correlations were found for PM females. This exploratory study extends previous reports of altered prefrontal neural engagement in PM carriers, and clearly demonstrates dissociation between control and PM females in the transformation of neural activation into overt measures of executive dysfunction. |
Wei Shen; Qingqing Qu; Aiping Ni; Junyi Zhou; Xingshan Li The time course of morphological processing during spoken word recognition in Chinese Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 6, pp. 1957–1963, 2017. @article{Shen2017, We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage in Chinese, before access to the representation of the whole word. |
Heather Sheridan; Eyal M. Reingold Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task Journal Article In: Journal of Vision, vol. 17, no. 3, pp. 1–12, 2017. @article{Sheridan2017, To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., the "Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns. |
Martha M. Shiell; Robert J. Zatorre White matter structure in the right planum temporale region correlates with visual motion detection thresholds in deaf people Journal Article In: Hearing Research, vol. 343, pp. 64–71, 2017. @article{Shiell2017, The right planum temporale region is typically involved in higher-order auditory processing. After deafness, this area reorganizes to become sensitive to visual motion. This plasticity is thought to support compensatory enhancements to visual ability. In earlier work we showed that enhanced visual motion detection abilities in early-deaf people correlate with cortical thickness in a subregion of the right planum temporale. In the current study, we build on this earlier result by examining the relationship between enhanced visual motion detection ability and white matter structure in this area in the same sample. We used diffusion-weighted magnetic resonance imaging and extracted the measures of white matter structure from a region-of-interest just below the grey matter surface where cortical thickness correlates with visual motion detection ability. We also tested control regions-of-interest in the auditory and visual cortices where we did not expect to find a relationship between visual motion detection ability and white matter. We found that in the right planum temporale subregion, and in no other tested regions, fractional anisotropy, radial diffusivity, and mean diffusivity correlated with visual motion detection thresholds. We interpret this change as further evidence of a structural correlate of cross-modal reorganization after deafness. |
Sergei L. Shishkin; Darisii G. Zhao; Andrei V. Isachenko; Boris M. Velichkovsky Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction Journal Article In: Psychology in Russia: State of the Art, vol. 10, no. 3, pp. 120–137, 2017. @article{Shishkin2017, Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and effortless interaction with machines in various fields of application. |
Talia Shrem; Leon Y. Deouell Hierarchies of attention and experimental designs: Effects of spatial and intermodal attention revisited Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 1, pp. 203–219, 2017. @article{Shrem2017, When attention is directed to stimuli in a given modality and location, information processing in other irrelevant modalities at this location is affected too. This spread of attention to irrelevant stimuli is often interpreted as superiority of location selection over modality selection. However, this conclusion is based on experimental paradigms in which spatial attention was transient whereas intermodal attention was sustained. Furthermore, whether modality selection affects processing in the task-relevant modality at irrelevant locations remains an open question. Here, we addressed effects of simultaneous spatial and intermodal attention in an EEG study using a balanced design where spatial attention was transient and intermodal attention sustained or vice versa. Effects of spatial attention were not affected by which modality was attended and effects of intermodal attention were not affected by whether the stimuli were at the attended location or not. This suggests not only spread of spatial attention to task-irrelevant modalities but also spread of intermodal attention to task-irrelevant locations. Whether spatial attention was transient or sustained did not alter the effect of spatial attention on visual N1 and Nd1 responses. Prestimulus preparatory occipital alpha band responses were affected by both transient and sustained spatial cueing, whereas late post-stimulus responses were more strongly affected by sustained than by transient spatial attention. Sustained but not transient intermodal attention affected late responses (>200 msec) to visual stimuli. Together, the results undermine the universal superiority of spatial attention and suggest that the mode of attention manipulation is an important factor determining attention effects. |