All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson’s, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2017 |
Trevor Brothers; Liv J. Hoversten; Matthew J. Traxler Looking back on reading ahead: No evidence for lexical parafoveal-on-foveal effects Journal Article In: Journal of Memory and Language, vol. 96, pp. 9–22, 2017. @article{Brothers2017, Current models of eye movement control during reading make different predictions regarding the possibility of parafoveal-on-foveal effects – i.e. whether the lexical properties of upcoming, parafoveal words can affect reading time. To date, there have been contradictory findings from correlational corpus analyses and carefully controlled experimental studies regarding the existence of these effects. To address this controversy, we conducted four experimental studies (total N = 244) investigating the effects of parafoveal word frequency during natural reading. These experiments showed no evidence for parafoveal-on-foveal effects in an environment that should have been highly conducive to parallel lexical processing. In addition, a Bayes Factor meta-analysis of the prior experimental literature also provided clear support for the null hypothesis. These findings confirm the predictions of serial attention models such as E-Z Reader, and call into question the findings of previous correlational corpus studies. |
Sarah Brown-Schmidt; Joseph C. Toscano Gradient acoustic information induces long-lasting referential uncertainty in short discourses Journal Article In: Language, Cognition and Neuroscience, vol. 32, no. 10, pp. 1211–1228, 2017. @article{BrownSchmidt2017, Three experiments examined the influence of gradient acoustic information on referential interpretation during spoken language processing and how this influence persists over time. Acoustic continua varying between the pronouns “he” and “she” were created and validated in two offline experiments. A third experiment examined whether these acoustic differences influence online pronoun interpretation, and whether this influence persists across words in a discourse. Measures of eye gaze showed immediate sensitivity to graded acoustic information. Moreover, acoustically induced uncertainty persisted across a five-word delay: When listeners encountered a word that disambiguated the referent of the pronoun differently than it had originally been interpreted, the amount of time they took to recover from an initial misinterpretation was directly related to distance along the acoustic continuum between the pronoun and the endpoint corresponding to the correct referent. These findings show that fine-grained acoustic detail induces referential uncertainty that is maintained over extended periods of time. |
Kelly R. Bullock; Florian Pieper; Adam J. Sachs; Julio C. Martinez-Trujillo Visual and presaccadic activity in area 8Ar of the macaque monkey lateral prefrontal cortex Journal Article In: Journal of Neurophysiology, vol. 118, no. 1, pp. 15–28, 2017. @article{Bullock2017, Common trends observed in many visual and oculomotor-related cortical areas include retinotopically organized receptive and movement fields exhibiting a Gaussian shape and increasing size with eccentricity. These trends are demonstrated in the frontal eye fields (FEF), many visual areas, and the superior colliculus (SC), but have not been thoroughly characterized in prearcuate area 8Ar of the prefrontal cortex. This is important since area 8Ar, located anterior to the FEF, is more cytoarchitectonically similar to prefrontal areas than premotor areas. Here we recorded the responses of 166 neurons in area 8Ar of two male macaques while the animals made visually guided saccades to a peripheral sine-wave grating stimulus positioned at one of 40 possible locations (8 angles along 5 eccentricities). To characterize the neurons' receptive and movement fields, we fit a bivariate Gaussian model to the baseline-subtracted average firing rate during stimulus presentation (early and late visual epoch) and prior to saccade onset (presaccadic epoch). 121/166 neurons showed spatially selective visual and presaccadic responses. Of the visually selective neurons, 76% preferred the contralateral visual hemifield, whereas 24% preferred the ipsilateral hemifield. The angular width of visual and movement-related fields scaled positively with increasing eccentricity. Moreover, responses of neurons with visual receptive fields were modulated by target contrast exhibiting sigmoid tuning curves that resemble those of visual neurons in upstream areas such as MT and V4. 
Finally, we found that neurons with receptive fields at similar spatial locations were clustered within the area; however, this organization did not appear retinotopic. |
Antimo Buonocore; Alessio Fracasso; David Melcher Pre-saccadic perception: Separate time courses for enhancement and spatial pooling at the saccade target Journal Article In: PLoS ONE, vol. 12, no. 6, pp. e0178902, 2017. @article{Buonocore2017, We interact with complex scenes using eye movements to select targets of interest. Studies have shown that the future target of a saccadic eye movement is processed differently by the visual system. A number of effects have been reported, including a benefit for perceptual performance at the target (“enhancement”), reduced influences of backward masking (“un-masking”), reduced crowding (“un-crowding”) and spatial compression towards the saccade target. We investigated the time course of these effects by measuring orientation discrimination for targets that were spatially crowded or temporally masked. In four experiments, we varied the target-flanker distance, the presence of forward/backward masks, the orientation of the flankers and whether participants made a saccade. Masking and randomizing flanker orientation reduced performance in both fixation and saccade trials. We found a small improvement in performance on saccade trials, compared to fixation trials, with a time course that was consistent with a general enhancement at the saccade target. In addition, a decrement in performance (reporting the average flanker orientation, rather than the target) was found in the time bins nearest saccade onset when random oriented flankers were used, consistent with spatial pooling around the saccade target. We did not find strong evidence for un-crowding. Overall, our pattern of results was consistent with both an early, general enhancement at the saccade target and a later, peri-saccadic compression/pooling towards the saccade target. |
Antimo Buonocore; Simran Purokayastha; Robert D. McIntosh Saccade reorienting is facilitated by pausing the oculomotor program Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 12, pp. 2068–2080, 2017. @article{Buonocore2017a, As we look around the world, selecting our targets, competing events may occur at other locations. Depending on current goals, the viewer must decide whether to look at new events or to ignore them. Two experimental paradigms formalize these response options: double-step saccades and saccadic inhibition. In the first, the viewer must reorient to a newly appearing target; in the second, they must ignore it. Until now, the relationship between reorienting and inhibition has been unexplored. In three experiments, we found saccadic inhibition ∼100 msec after a new target onset, regardless of the task instruction. Moreover, if this automatic inhibition is boosted by an irrelevant flash, reorienting is facilitated, suggesting that saccadic inhibition plays a crucial role in visual behavior, as a bottom–up brake that buys the time needed for decisional processes to act. Saccadic inhibition may be a ubiquitous pause signal that provides the flexibility for voluntary behavior to emerge. |
Kate Burleson-Lesser; Flaviano Morone; Paul DeGuzman; Lucas C. Parra; Hernán A. Makse Collective behaviour in video viewing: A thermodynamic analysis of gaze position Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0168995, 2017. @article{BurlesonLesser2017, Videos and commercials produced for large audiences can elicit mixed opinions. We wondered whether this diversity is also reflected in the way individuals watch the videos. To answer this question, we presented 65 commercials with high production value to 25 individuals while recording their eye movements, and asked them to provide preference ratings for each video. We find that gaze positions for the most popular videos are highly correlated. To explain the correlations of eye movements, we model them as “interactions” between individuals. A thermodynamic analysis of these interactions shows that they approach a “critical” point such that any stronger interaction would put all viewers into lock-step and any weaker interaction would fully randomise patterns. At this critical point, groups with similar collective behaviour in viewing patterns emerge while maintaining diversity between groups. Our results suggest that popularity of videos is already evident in the way we look at them, and that we maintain diversity in viewing behaviour even as distinct patterns of groups emerge. Our results can be used to predict popularity of videos and commercials at the population level from the collective behaviour of the eye movements of a few viewers. |
David Buttelmann; Andy Schieler; Nicole Wetzel; Andreas Widmann Infants' and adults' looking behavior does not indicate perceptual distraction for constrained modelled actions – An eye-tracking study Journal Article In: Infant Behavior and Development, vol. 47, pp. 103–111, 2017. @article{Buttelmann2017, When observing a novel action, infants pay attention to the model's constraints when deciding whether to imitate this action or not. Gergely et al. (2002) found that more 14-month-olds copied a model's use of her head to operate a lamp when she used her head while her hands were free than when she had to use this means because it was the only means available to her (i.e., her hands were occupied). The perceptual distraction account (Beisert et al., 2012) claims that differences between conditions in terms of the amount of attention infants paid to the modeled action caused the differences in infants' performance between conditions. In order to investigate this assumption we presented 14-month-olds (N = 34) with an eye-tracking paradigm and analyzed their looking behavior when observing the head-touch demonstration in the two original conditions. Subsequently, they had the chance to operate the apparatus themselves, and we measured their imitative responses. In order to explore the perceptual processes taking place in this paradigm in adulthood, we also presented adults (N = 31) with the same task. Apart from the fact that we did not replicate the findings in imitation with our participants, the eye-tracking results do not support the perceptual distraction account: infants did not differ statistically, not even as a tendency, in their amount of looking at the modeled action across the two conditions. Adults also did not statistically differ in their looking at the relevant action components. However, both groups predominantly observed the relevant head action.
Consequently, infants and adults do not seem to attend differently to constrained and unconstrained modelled actions. |
Laura Cacciamani; Erica Wager; Mary A. Peterson; Paige E. Scalf Age-related changes in perirhinal cortex sensitivity to configuration and part familiarity and connectivity to visual cortex Journal Article In: Frontiers in Aging Neuroscience, vol. 9, pp. 291, 2017. @article{Cacciamani2017, The perirhinal cortex (PRC) is a medial temporal lobe (MTL) structure known to be involved in assessing whether an object is familiar (i.e., meaningful) or novel. Recent evidence shows that the PRC is sensitive to the familiarity of both whole object configurations and their parts, and suggests the PRC may modulate part familiarity responses in V2. Here, using functional magnetic resonance imaging (fMRI), we investigated age-related decline in the PRC's sensitivity to part/configuration familiarity and assessed its functional connectivity to visual cortex in young and older adults. Participants categorized peripherally presented silhouettes as familiar ("real-world") or novel. Part/configuration familiarity was manipulated via three silhouette configurations: Familiar (parts/configurations familiar), Control Novel (parts/configurations novel), and Part-Rearranged Novel (parts familiar, configurations novel). "Real-world" judgments were less accurate than "novel" judgments, although accuracy did not differ between age groups. The fMRI data revealed differential neural activity, however: In young adults, a linear pattern of activation was observed in left hemisphere (LH) PRC, with Familiar > Control Novel > Part-Rearranged Novel. Older adults did not show this pattern, indicating age-related decline in the PRC's sensitivity to part/configuration familiarity. A functional connectivity analysis revealed a significant coupling between the PRC and V2 in the LH in young adults only. Older adults showed a linear pattern of activation in the temporopolar cortex (TPC), but no evidence of TPC-V2 connectivity. 
This is the first study to demonstrate age-related decline in the PRC's representations of part/configuration familiarity and its covariance with visual cortex. |
Tim H. W. Cornelissen; Melissa L. -H. Võ Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 1, pp. 154–168, 2017. @article{Cornelissen2017, People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics, let alone their semantic congruity, processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the inconsistent objects and no more of the consistent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior. |
Benjamin W. Corrigan; Roberto A. Gulli; Guillaume Doucet; Julio C. Martinez-Trujillo Characterizing eye movement behaviors and kinematics of non-human primates during virtual navigation tasks Journal Article In: Journal of Vision, vol. 17, no. 12, pp. 1–22, 2017. @article{Corrigan2017, Virtual environments (VE) allow testing complex behaviors in naturalistic settings by combining highly controlled visual stimuli with spatial navigation and other cognitive tasks. They also allow for the recording of eye movements using high-precision eye tracking techniques, which is important in electrophysiological studies examining the response properties of neurons in visual areas of nonhuman primates. However, during virtual navigation, the pattern of retinal stimulation can be highly dynamic which may influence eye movements. Here we examine whether and how eye movement patterns change as a function of dynamic visual stimulation during virtual navigation tasks, relative to standard oculomotor tasks. We trained two rhesus macaques to use a joystick to navigate in a VE to complete two tasks. To contrast VE behavior with classic measurements, the monkeys also performed a simple Cued Saccade task. We used a robust algorithm for rapid classification of saccades, fixations, and smooth pursuits. We then analyzed the kinematics of saccades during all tasks, and specifically during different phases of the VE tasks. We found that fixation to smooth pursuit ratios were smaller in VE tasks (4:5) compared to the Cued Saccade task (7:1), reflecting a more intensive use of smooth pursuit to foveate targets in VE than in a standard visually guided saccade task or during spontaneous fixations. Saccades made to rewarded targets (exploitation) tended to have increased peak velocities compared to saccades made to unrewarded objects (exploration). VE exploitation saccades were 6% slower than saccades to discrete targets in the Cued Saccade task. 
Virtual environments represent a technological advance in experimental design for nonhuman primates. Here we provide a framework to study the ways that eye movements change between and within static and dynamic displays. |
Francisco M. Costela; Sidika Kajtezovic; Russell L. Woods The preferred retinal locus used to watch videos Journal Article In: Investigative Ophthalmology & Visual Science, vol. 58, no. 14, pp. 6073–6081, 2017. @article{Costela2017a, Purpose: Eccentric viewing is a common strategy used by people with central vision loss (CVL) to direct the eye such that the image falls onto functioning peripheral retina, known as the preferred retinal locus (PRL). It has long been acknowledged that we do not know whether the PRL used in a fixation test is also used when performing tasks. We present an innovative method to determine whether the same PRL observed during a fixation task was used to watch videos and whether poor resolution affects gaze location. Methods: The gaze of a group of 60 normal vision (NV) observers was used to define a democratic center of interest (COI) of video clips from movies and television. For each CVL participant (N = 20), we computed the gaze offsets from the COI across the video clips. The distribution of gaze offsets of the NV participants was used to define the limits of NV behavior. If the gaze offset was within this 95% confidence interval, we presumed that the same PRL was used for fixation and video watching. Another 15 NV participants watched the video clips with various levels of defocus blur. Results: CVL participants had wider gaze-offset distributions than NV participants (P < 0.001). Gaze offsets of 18/20 CVL participants were outside the NV confidence interval. Further, none of the 15 NV participants watching the same videos with spherical defocus blur had a gaze offset that was decentered (outside the NV confidence interval), suggesting that resolution was not the problem. Conclusions: This indicates that many CVL participants were using a PRL to view videos that differed from that found with a fixation task and that it was not caused by poor resolution alone.
The relationship between these locations needs further investigation. |
Francisco M. Costela; Michael B. McCamy; Mary Coffelt; Jorge Otero-Millan; Stephen L. Macknik; Susana Martinez-Conde Changes in visibility as a function of spatial frequency and microsaccade occurrence Journal Article In: European Journal of Neuroscience, vol. 45, no. 3, pp. 433–439, 2017. @article{Costela2017, Fixational eye movements (FEM), including microsaccades, drift, and tremor, shift our eye position during ocular fixation, producing retinal motion that is thought to help visibility by counteracting neural adaptation to unchanging stimulation. Yet, how each FEM type influences this process is still debated. Recent studies found little to no relationship between microsaccades and visual perception of spatial frequencies (SF), and concluded that any effects microsaccades may have on vision do not extend to the SF domain. However, these conclusions were based on coarse analyses that make it hard to appreciate the actual effects of microsaccades on target visibility as a function of SF. Thus, how microsaccades contribute to the visibility of stimuli of different SFs remains unclear. Here we asked how the visibility of targets of various SFs changed over time, in relationship with concurrent microsaccade production. Participants continuously reported on changes in target visibility, allowing us to time-lock ongoing changes in microsaccade parameters to perceptual transitions in visibility. Microsaccades restored/increased the visibility of low SF targets more efficiently than that of high SF targets. Yet, microsaccade rates rose before periods of increased visibility, and dropped before periods of diminished visibility, suggesting that microsaccades boosted target visibility across a wide range of SFs. Our data also indicate that visual stimuli fade/become harder to see less often in the presence of microsaccades. In addition, larger microsaccades restored/increased target visibility more effectively than smaller microsaccades. 
These combined results support the proposal that microsaccades enhance visibility across a broad variety of SFs. |
Jenna Course-Choi; Harry Saville; Nazanin Derakshan The effects of adaptive working memory training and mindfulness meditation training on processing efficiency and worry in high worriers Journal Article In: Behaviour Research and Therapy, vol. 89, pp. 1–13, 2017. @article{CourseChoi2017, Worry is the principal characteristic of generalised anxiety disorder, and has been linked to deficient attentional control, a main function of working memory (WM). Adaptive WM training and mindfulness meditation practice (MMP) have both shown potential to increase attentional control. The present study hence investigates the individual and combined effects of MMP and a dual adaptive n-back task on a non-clinical, randomised sample of high worriers. Sixty participants were tested before and after seven days of training. Assessment included self-report questionnaires, as well as performance tasks measuring attentional control and working memory capacity. Combined training resulted in continued reduction in worry in the week after training, highlighting the potential of utilising n-back training as an adjunct to established clinical treatment. Engagement with WM training correlated with immediate improvements in attentional control and resilience, with worry decreasing over time. Implications of these findings and suggestions for future research are discussed. |
Matt Craddock; Frank Oppermann; Matthias M. Müller; Jasna Martinovic Modulation of microsaccades by spatial frequency during object categorization Journal Article In: Vision Research, vol. 130, pp. 48–56, 2017. @article{Craddock2017, The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature – the presence of an object – and by low-level stimulus characteristics – spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions. |
Hayley Crawford; Joanna Moss; Chris Oliver; Deborah M. Riby Differential effects of anxiety and autism on social scene scanning in males with fragile X syndrome Journal Article In: Journal of Neurodevelopmental Disorders, vol. 9, pp. 1–10, 2017. @article{Crawford2017a, BACKGROUND: Existing literature draws links between social attention and socio-behavioural profiles in neurodevelopmental disorders. Fragile X syndrome (FXS) is associated with a known socio-behavioural phenotype of social anxiety and social communication difficulties alongside high social motivation. However, studies investigating social attention in males with FXS are scarce. Using eye tracking, this study investigates social attention and its relationship with both anxiety and autism symptomatology in males with FXS. METHODS: We compared dwell times to the background, body, and face regions of naturalistic social scenes in 11 males with FXS (M age = 26.29) and 11 typically developing (TD) children who were matched on gender and receptive language ability (M age = 6.28). Using informant-report measures, we then investigated the relationships between social scene scanning and anxiety, and social scene scanning and social communicative impairments. RESULTS: Males with FXS did not differ from TD children in overall dwell time to the background, body, or face regions of the naturalistic social scenes. Whilst males with FXS displayed developmentally 'typical' social attention, increased looking at faces was associated with both heightened anxiety and fewer social communication impairments in this group. CONCLUSIONS: These results offer novel insights into the mechanisms associated with social attention in FXS and provide evidence to suggest that anxiety and autism symptomatology, which are both heightened in FXS, have differential effects on social attention. |
Trevor J. Crawford; Eleanor S. Smith; Donna M. Berry Eye gaze and aging: Selective and combined effects of working memory and inhibitory control Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 563, 2017. @article{Crawford2017, Eye-tracking is increasingly studied as a cognitive and biological marker for the early signs of neuropsychological and psychiatric disorders. However, in order to make further progress, a more comprehensive understanding of the age-related effects on eye- tracking is essential. The antisaccade task requires participants to make saccadic eye movements away from a prepotent stimulus. Speculation on the cause of the observed age-related differences in the antisaccade task largely centers around two sources of cognitive dysfunction: inhibitory control (IC) and working memory (WM). The IC account views cognitive slowing and task errors as a direct result of the decline of inhibitory cognitive mechanisms. An alternative theory considers that a deterioration of WM is the cause of these age-related effects on behavior. The current study assessed IC and WM processes underpinning saccadic eye movements in young and older participants. This was achieved with three experimental conditions that systematically varied the extent to which WM and IC were taxed in the antisaccade task: a memory-guided task was used to explore the effect of increasing the WM load; a Go/No-Go task was used to explore the effect of increasing the inhibitory load; a ‘standard' antisaccade task retained the standard WM and inhibitory loads. Saccadic eye movements were also examined in a control condition: the standard prosaccade task where the load of WM and IC were minimal or absent. Saccade latencies, error rates and the spatial accuracy of saccades of older participants were compared to the same measures in healthy young controls across the conditions. The results revealed that aging is associated with changes in both IC and WM. 
Increasing the inhibitory load was associated with increased reaction times in the older group, while the increased WM load and the inhibitory load contributed to an increase in the antisaccade errors. |
Kate Crookes; Gillian Rhodes Poor recognition of other-race faces cannot always be explained by a lack of effort Journal Article In: Visual Cognition, vol. 25, no. 4-6, pp. 430–441, 2017. @article{Crookes2017, People are generally better at recognizing own-race than other-race faces. This "other-race effect" is very well established although the underlying causes are much debated. Social-cognitive accounts argue that the other-race effect stems from a lack of motivation to individuate other-race faces, whereas perceptual expertise accounts argue that it reflects the tuning of face-processing mechanisms by experience to own-race faces. We investigated the effort people apply to recognize own-race and other-race faces. Caucasian participants completed the Australian and Chinese Cambridge Face Memory Tasks, once with the standard timing and once with self-paced study phases. If people are less motivated to recognize other-race faces they should apply less effort, that is, when given control over viewing times they should spend less time studying other-race than own-race faces. Contrary to social-cognitive accounts, there was no evidence of reduced effort for other-race faces. Participants did not spend less time studying other-race than own-race faces in the self-paced condition. Moreover, participants reported applying significantly more effort to telling apart other-race than own-race faces. These results are not consistent with reduced motivation to individuate other-race faces. Thus, they appear more consistent with perceptual expertise rather than social-cognitive accounts of the other-race effect. |
Damian Cruse; Marco Fattizzo; Adrian M. Owen; Davinia Fernández-Espejo Why use a mirror to assess visual pursuit in prolonged disorders of consciousness? Evidence from healthy control participants Journal Article In: BMC Neurology, vol. 17, pp. 1–5, 2017. @article{Cruse2017, Background: Evidence of reliable smooth visual pursuit is crucial for both diagnosis and prognosis in prolonged disorders of consciousness (PDOC). However, a mirror is more likely than an object to elicit evidence of smooth pursuit. Our objective was to identify the physiological and/or cognitive mechanism underlying the mirror benefit. Methods: We recorded eye-movements while healthy participants simultaneously completed a visual pursuit task and a cognitively demanding two-back task. We manipulated the stimulus to be pursued (two levels: mirror, ball) and the simultaneous cognitive load (pursuit only, pursuit plus two-back task) within subjects. Results: Pursuit of the reflected-own-face in the mirror was associated with briefer fixations that occurred less uniformly across the horizontal plane relative to object pursuit. Secondary task performance did not differ between pursuit stimuli. The secondary task also did not affect eye movement measures, nor did it interact with pursuit stimulus. Conclusions: Reflected-own-face pursuit is no less cognitively demanding than object pursuit, but it naturally elicits smoother eye movements (i.e. briefer pauses to fixate). A mirror therefore provides greater sensitivity to detect smooth visual pursuit in PDOC because the naturally smoother eye movements may be identified more confidently by the assessor. |
Eric Castet; Marine Descamps; Ambre Denis-Noël; Pascale Colé Letter and symbol identification: No evidence for letter-specific crowding mechanisms Journal Article In: Journal of Vision, vol. 17, no. 11, pp. 1–19, 2017. @article{Castet2017, It has been proposed that letters, as opposed to symbols, trigger specialized crowding processes, boosting identification of the first and last letters of words. This hypothesis is based on evidence that single-letter accuracy as a function of within-string position has a W shape (the classic serial position function [SPF] in psycholinguistics) whereas an inverted V shape is obtained when measured with symbols. Our main goal was to test the robustness of the latter result. Our hypothesis was that any letter/symbol difference might result from short-term visual memory processes (due to the partial report [PR] procedures used in SPF studies) rather than from crowding. We therefore removed the involvement of short-term memory by precueing target-item position and compared SPFs with precueing and postcueing. Perimetric complexity was stringently matched between letters and symbols. In postcueing conditions similar to previous studies, we did not reproduce the inverted V shape for symbols: Clear-cut W shapes were observed with an overall smaller accuracy for symbols compared to letters. This letter/symbol difference was dramatically reduced in precueing conditions in keeping with our prediction. Our results are not consistent with the claim that letter strings trigger specialized crowding processes. We argue that PR procedures are not fit to isolate crowding processes. |
Matthew R. Cavanaugh; Krystel R. Huxlin Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness Journal Article In: Neurology, vol. 88, pp. 1856–1864, 2017. @article{Cavanaugh2017, Objective: To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. Methods: 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Results: Trained patients recovered ~108 deg² of vision on average, while untrained patients spontaneously improved over an area of ~16 deg². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. Untrained patients counterbalanced their improvements with worsening of sensitivity over ~9 deg² of their visual field. Worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. Conclusions: In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients. |
Bhismadev Chakrabarti; Anthony Haffey; Loredana Canzano; Christopher P. Taylor; Eugene McSorley Individual differences in responsivity to social rewards: Insights from two eye-tracking tasks Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0185146, 2017. @article{Chakrabarti2017, Humans generally prefer social over nonsocial stimuli from an early age. Reduced preference for social rewards has been observed in individuals with autism spectrum conditions (ASC). This preference has typically been noted in separate tasks that measure orienting toward and engaging with social stimuli. In this experiment, we used two eye-tracking tasks to index both of these aspects of social preference in 77 typical adults. We used two measures, global effect and preferential looking time. The global effect task measures saccadic deviation toward a social stimulus (related to 'orienting'), while the preferential looking task records gaze duration bias toward social stimuli (relating to 'engaging'). Social rewards were found to elicit greater saccadic deviation and greater gaze duration bias, suggesting that they have both greater salience and higher value compared to nonsocial rewards. Trait empathy was positively correlated with the measure of relative value of social rewards, but not with their salience. This study thus elucidates the relationship of empathy with social reward processing. |
Mrinmoy Chakrabarty; Tamami Nakano; Shigeru Kitazawa Short-latency allocentric control of saccadic eye movements Journal Article In: Journal of Neurophysiology, vol. 117, no. 1, pp. 376–387, 2017. @article{Chakrabarty2017, It is generally accepted that the neural circuits that are implicated in saccade control use retinotopically coded target locations. However, several studies have revealed that nonretinotopic representation is also used. This idea raises a question about whether nonretinotopic coding is egocentric (head or body centered) or allocentric (environment centered). In the current study, we hypothesized that allocentric coding may play a crucial role in immediate saccade control. To test this hypothesis, we used an immediate double-step saccade task toward two sequentially flashed targets with a frame in the background, and we examined whether the end point of the second saccade was affected by a transient shift of the background that participants were told to ignore. When the background was shifted transiently upward (or downward) during the flash of the second target, the second saccade generally erred downward (or upward) relative to the target, in the direction opposite to the shift of the background. The effect on the second saccade became significant within 150 ms after the frame was presented for decoding and was built up for 200 ms thereafter. When the second saccade was not adjusted, a small, corrective saccade followed within 300 ms. The effect scaled linearly with the shift size up to 3° for a noncorrective second saccade and up to 6° for a corrective saccade. The present results show that an allocentric location of a target is rapidly represented by the brain and used for controlling saccades. NEW & NOTEWORTHY We found that the saccade end point was shifted from the actual target position toward the direction expected from allocentric coding when a large frame in the background was transiently shifted during the period of target presentation. 
The effect occurred within 150 ms. The present study provides direct evidence that the brain rapidly uses allocentric coding of a target to control immediate saccades. |
Jason L. Chan; Michael J. Koval; Kevin D. Johnston; Stefan Everling Neural correlates for task switching in the macaque superior colliculus Journal Article In: Journal of Neurophysiology, vol. 118, pp. 2156–2170, 2017. @article{Chan2017, Successful task switching requires a network of brain areas to select, maintain, implement, and execute the appropriate task. Although frontoparietal brain areas are thought to play a critical role in task switching by selecting and encoding task rules and exerting top-down control, how brain areas closer to the execution of tasks participate in task switching is unclear. The superior colliculus (SC) integrates information from various brain areas to generate saccades and is likely influenced by task switching. Here, we investigated switch costs in nonhuman primates and their neural correlates in the activity of SC saccade-related neurons in monkeys performing cued, randomly interleaved pro- and anti-saccade trials. We predicted that behavioral switch costs would be associated with differential modulations of SC activity in trials on which the task was switched vs. repeated, with activity on the current trial resembling that associated with the task set of the previous trial when a switch occurred. We observed both error rate and reaction time switch costs and changes in the discharge rate and timing of activity in SC neurons between switch and repeat trials. These changes were present later in the task only after fixation on the cue stimuli but before saccade onset. These results further establish switch costs in macaque monkeys and suggest that SC activity is modulated by task-switching processes in a manner inconsistent with the concept of task set inertia. |
Vassiki Chauhan; Matteo Visconti di Oleggio Castello; Alireza Soltani; M. Ida Gobbini Social saliency of the cue slows attention shifts Journal Article In: Frontiers in Psychology, vol. 8, pp. 738, 2017. @article{Chauhan2017, Eye gaze is a powerful cue that indicates where another person's attention is directed in the environment. Seeing another person's eye gaze shift spontaneously and reflexively elicits a shift of one's own attention to the same region in space. Here, we investigated whether reallocation of attention in the direction of eye gaze is modulated by personal familiarity with faces. On the one hand, the eye gaze of a close friend should be more effective in redirecting our attention as compared to the eye gaze of a stranger. On the other hand, the social relevance of a familiar face might itself hold attention and, thereby, slow lateral shifts of attention. To distinguish between these possibilities, we measured the efficacy of the eye gaze of personally familiar and unfamiliar faces as directional attention cues using adapted versions of the Posner paradigm with saccadic and manual responses. We found that attention shifts were slower when elicited by a perceived change in the eye gaze of a familiar individual as compared to attention shifts elicited by unfamiliar faces at short latencies (100 ms). We also measured simple detection of change in direction of gaze in personally familiar and unfamiliar faces to test whether slower attention shifts were due to slower detection. Participants detected changes in eye gaze faster for familiar faces than for unfamiliar faces. Our results suggest that personally familiar faces briefly hold attention due to their social relevance, thereby slowing shifts of attention, even though changes in gaze direction are detected faster in familiar faces. |
Romain Chaumillon; Nadia Alahyane; Patrice Senot; Judith Vergne; Christelle Lemoine-Lardennois; Jean Blouin; Karine Doré-Mazars; Alain Guillaume; Dorine Vergilino-Perez Asymmetry in visual information processing depends on the strength of eye dominance Journal Article In: Neuropsychologia, vol. 96, pp. 129–136, 2017. @article{Chaumillon2017, Unlike handedness, sighting eye dominance, defined as the eye unconsciously chosen when performing monocular tasks, is very rarely considered in studies investigating cerebral asymmetries. We previously showed that sighting eye dominance has an influence on visually triggered manual action with shorter reaction time (RT) when the stimulus appears in the contralateral visual hemifield with respect to the dominant eye (Chaumillon et al. 2014). We also suggested that eye dominance may be more or less pronounced depending on individuals and that this eye dominance strength could be evaluated through saccadic peak velocity analysis in binocular recordings (Vergilino-Perez et al. 2012). Based on these two previous studies, we further examine here whether the strength of the eye dominance can modulate the influence of this lateralization on manual reaction time. Results revealed that participants categorized as having a strong eye dominance, but not those categorized as having a weak eye dominance, exhibited the difference in RT between the two visual hemifields. This present study reinforces that the analysis of saccade peak velocity in binocular recordings provides an effective tool to better categorize the eye dominance. It also shows that the influence of eye dominance in visuo-motor tasks depends on its strength. Our study also highlights the importance of considering the strength of eye dominance in future studies dealing with brain lateralization. |
Fuguo Chen; Jie Liu; Shuanghong Chen; Hong Chen; Xiao Gao Eye movement study on attention bias to body height stimuli in height dissatisfied males Journal Article In: Frontiers in Psychology, vol. 8, pp. 2209, 2017. @article{Chen2017, The present study investigated attention bias in response to height-related words among young men in China. Forty-seven men [26 high height dissatisfied (HHD) and 21 low height dissatisfied (LHD)] performed a dot-probe task. Eye movement (EM) recordings showed that compared to LHD men, HHD men had an avoidance bias in response to height-related words, which was revealed by less frequent first fixations on both tall-related and short-related words, and showed significantly shorter first fixations on short-related words. There was no other significant difference in EM indices (i.e., first fixation latency and gaze duration) between the two groups. In addition, HHD participants were significantly slower than LHD participants when responding to probes preceded by short-related words, while there was no difference when probes were preceded by tall-related or neutral words. In sum, the present results indicate that HHD men selectively avoid cues related to short height. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Attention is allocated closely ahead of the target during smooth pursuit eye movements: Evidence from EEG frequency tagging Journal Article In: Neuropsychologia, vol. 102, pp. 206–216, 2017. @article{Chen2017c, It is under debate whether attention during smooth pursuit is centered right on the pursuit target or allocated preferentially ahead of it. Attentional deployment was previously probed using a secondary task, which might have altered attention allocation and led to inconsistent findings. We used frequency-tagged steady-state visual evoked potentials (SSVEPs) to measure attention allocation in the absence of any secondary probing task. The observers pursued a moving dot while stimuli flickering at different frequencies were presented at various locations ahead of or behind the pursuit target. We observed a significant increase in EEG power at the flicker frequency of the stimulus in front of the pursuit target, compared to the frequency of the stimulus behind. When testing many different locations, we found that the enhancement was detectable up to about 1.5° ahead during pursuit, but vanished by 3.5°. In a control condition using attentional cueing during fixation, we did observe an enhanced EEG response to stimuli at this eccentricity, indicating that the focus of attention during pursuit is narrower than allowed for by the resolution of the attentional system. In a third experiment, we ruled out the possibility that the SSVEP enhancement was a byproduct of the catch-up saccades occurring during pursuit. Overall, we showed that attention is on average allocated ahead of the pursuit target during smooth pursuit. EEG frequency tagging seems to be a powerful technique that allows for the investigation of attention/perception implicitly when an overt task would be confounding. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Enhanced brain responses to color during smooth-pursuit eye movements Journal Article In: Journal of Neurophysiology, vol. 118, pp. 749–754, 2017. @article{Chen2017a, Eye movements alter visual perceptions in a number of ways. During smooth pursuit eye movements, previous studies reported decreased detection threshold for colored stimuli and for high-spatial-frequency luminance stimuli, suggesting a boost in the parvocellular system. The present study investigated the underlying neural mechanism using EEG in human participants. Participants followed a moving target with smooth pursuit eye movements while steady-state visually evoked potentials (SSVEPs) were elicited by equiluminant red-green flickering gratings in the background. SSVEP responses to color gratings were 18.9% higher during smooth pursuit than during fixation. There was no enhancement of SSVEPs by smooth pursuit when the flickering grating was defined by luminance instead of color. This result provides physiological evidence that the chromatic response in the visual system is boosted by the execution of smooth pursuit eye movements in humans. Since the response improvement is thought to be due to an improved response in the parvocellular system, SSVEPs to equiluminant stimuli could provide a direct test of parvocellular signaling, especially in populations where an explicit behavioral response from the participant is not feasible. |
Anna B. Cieślicka; Roberto R. Heredia How to "save your skin" when processing L2 idioms: An eye movement analysis of idiom transparency and cross-language similarity among bilinguals Journal Article In: Iranian Journal of Language Teaching Research, vol. 5, no. 3, pp. 81–107, 2017. @article{Cieslicka2017, The current study looks at whether bilinguals varying in language dominance show a processing advantage for idiomatic over non-idiomatic phrases and to what extent this effect is modulated by idiom transparency (i.e., the degree to which the idiom's figurative meaning can be inferred from its literal analysis) and cross-language similarity (i.e., the extent to which an idiom has an identical translation equivalent in another language). An eye tracking experiment was conducted in which Spanish-English bilinguals were presented with literally plausible (i.e., idioms that can be interpreted both figuratively and literally) transparent (e.g., break the ice, where the figurative meaning can be deduced from analyzing the idiom literally) and opaque idioms (e.g., hit the sack, where the meaning cannot be inferred from idiom constituents). Idioms varied along the dimension of cross-language similarity, with half the idioms having word-for-word translation equivalents in English and Spanish and another half being different, that is, having no similar counterpart in another language. Each idiom was used either in its literal (e.g., get cold feet: become cold) or figurative meaning (e.g., get cold feet: become afraid). In control phrases the last word of the idiom was replaced by a carefully matched control (e.g., get cold hands). Reading measures (fixation count, first pass/gaze reading time and total reading time) revealed that cross-language similarity interacts in an important way with idiom transparency, such that opaque idioms were more difficult to process than transparent ones, and different transparent idioms were processed faster than similar transparent idioms. 
Results are discussed with regard to the holistic vs. compositional views of idiom storage and the role of activated L1 (first language) knowledge in the course of L2 (second language) figurative processing. |
Helen E. Clark; John A. Perrone; Robert B. Isler; Samuel G. Charlton Fixating on the size-speed illusion of approaching railway trains: What we can learn from our eye movements Journal Article In: Accident Analysis and Prevention, vol. 99, pp. 110–113, 2017. @article{Clark2017, Railway level crossing collisions have recently been linked to a size-speed illusion where larger objects such as trains appear to move slower than smaller objects such as cars. An explanation for this illusion has centred on observer eye movements – particularly in relation to the larger, longer train. A previous study (Clark et al., 2016) found participants tend to make initial fixations to locations around the visual centroid of a moving vehicle; however individual eye movement patterns tended to be either fixation-saccade-fixation type, or smooth pursuit. It is therefore unknown as to which type of eye movement contributes to the size-speed illusion. This study isolated fixation eye movements by requiring participants to view computer animated sequences in a laboratory setting, where a static fixation square was placed in the foreground at one of two locations on a train (front and centroid). Results showed that even with the square placed around the front location of a vehicle, participants still underestimated the speed of the train relative to the car and underestimation was greater when the square was placed around the visual centroid of the train. Our results verify that manipulation of eye movement behaviour can be effective in reducing the magnitude of the size-speed illusion and propose that interventions based on this manipulation should be designed and tested for effectiveness. |
Alasdair D. F. Clarke; Aoife Mahon; Alex Irvine; Amelia R. Hunt People are unable to recognize or report on their own eye movements Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 11, pp. 2251–2270, 2017. @article{Clarke2017, Eye movements bring new information into our visual system. The selection of each fixation is the result of a complex interplay of image features, task goals, and biases in motor control and perception. To what extent are we aware of the selection of saccades and their consequences? Here we use a converging methods approach to answer this question in three diverse experiments. In Experiment 1, participants were directed to find a target in a scene by a verbal description of it. We then presented the path the eyes took together with those of another participant. Participants could only identify their own path when the comparison scanpath was searching for a different target. In Experiment 2, participants viewed a scene for three seconds and then named objects from the scene. When asked whether they had looked directly at a given object, participants' responses were primarily determined by whether or not the object had been named, and not by whether it had been fixated. In Experiment 3, participants executed saccades towards single targets and then viewed a replay of either the eye movement they had just executed or that of someone else. Participants were at chance to identify their own saccade, even when it contained under- and overshoot corrections. The consistent inability to report on one's own eye movements across experiments suggests that awareness of eye movements is extremely impoverished or altogether absent. This is surprising given that information about prior eye movements is clearly used during visual search, motor error correction, and learning. |
Ivar Adrianus H. Clemens; Luc P. J. Selen; Antonella Pomante; Paul R. MacNeilage; W. Pieter Medendorp Eye movements in darkness modulate self-motion perception Journal Article In: eNeuro, vol. 4, no. 1, pp. 1–12, 2017. @article{Clemens2017, During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than predicted by chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation. |
Daniel R. Coates; Johan Wagemans; Bilge Sayim Diagnosing the periphery: Using the Rey-Osterrieth Complex Figure drawing test to characterize peripheral visual function Journal Article In: i-Perception, vol. 8, no. 3, pp. 1–20, 2017. @article{Coates2017, Peripheral vision is strongly limited by crowding, the deleterious influence of neighboring stimuli on target perception. Many quantitative aspects of this phenomenon have been characterized, but the specific nature of the perceptual degradation remains elusive. We utilized a drawing technique to probe the phenomenology of peripheral vision, using the Rey–Osterrieth Complex Figure, a standard neuropsychological clinical instrument. The figure was presented at 12° or 6° in the right visual field, with eye tracking to ensure that the figure was only presented when observers maintained stable fixation. Participants were asked to draw the figure with free viewing, capturing its peripheral appearance. A foveal condition was used to measure copying performance in direct view. To assess the drawings, two raters used standard scoring systems that evaluated feature positions, spatial distortions, and omission errors. Feature scores tended to decrease with increasing eccentricity, both within and between conditions... |
Andrew L. Cohen; Namyi Kang; Tanya L. Leise Multi-attribute, multi-alternative models of choice: Choice, reaction time, and process tracing Journal Article In: Cognitive Psychology, vol. 98, pp. 45–72, 2017. @article{Cohen2017, The first aim of this research is to compare computational models of multi-alternative, multi-attribute choice when attribute values are explicit. The choice predictions of utility (standard random utility & weighted valuation), heuristic (elimination-by-aspects, lexicographic, & maximum attribute value), and dynamic (multi-alternative decision field theory, MDFT, & a version of the multi-attribute linear ballistic accumulator, MLBA) models are contrasted on both preferential and risky choice data. Using both maximum likelihood and cross-validation fit measures on choice data, the utility and dynamic models are preferred over the heuristic models for risky choice, with a slight overall advantage for the MLBA for preferential choice. The response time predictions of these models (except the MDFT) are then tested. Although the MLBA accurately predicts response time distributions, it only weakly accounts for stimulus-level differences. The other models completely fail to account for stimulus-level differences. Process tracing measures, i.e., eye and mouse tracking, were also collected. None of the qualitative predictions of the models are completely supported by that data. These results suggest that the models may not appropriately represent the interaction of attention and preference formation. To overcome this potential shortcoming, the second aim of this research is to test preference-formation assumptions, independently of attention, by developing the models of attentional sampling (MAS) family, which incorporates the empirical gaze patterns into a sequential sampling framework. 
An MAS variant that includes attribute values, but only updates the currently viewed alternative and does not contrast values across alternatives, performs well in both experiments. Overall, the results support the dynamic models, but point to the need to incorporate a framework that more accurately reflects the relationship between attention and the preference-formation process. |
Merryn D. Constable; Stefanie I. Becker Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 5, pp. 1611–1619, 2017. @article{Constable2017, According to the Sapir–Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect—namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is a combination of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings. |
Uschi Cop; Nicolas Dirix; Denis Drieghe; Wouter Duyck Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading Journal Article In: Behavior Research Methods, vol. 49, no. 2, pp. 602–615, 2017. @article{Cop2017, This article introduces GECO, the Ghent Eye-Tracking Corpus, a monolingual and bilingual corpus of the eyetracking data of participants reading a complete novel. English monolinguals and Dutch–English bilinguals read an entire novel, which was presented in paragraphs on the screen. The bilinguals read half of the novel in their first language, and the other half in their second language. In this article, we describe the distributions and descriptive statistics of the most important reading time measures for the two groups of participants. This large eyetracking corpus is perfectly suited for both exploratory purposes and more directed hypothesis testing, and it can guide the formulation of ideas and theories about naturalistic reading processes in a meaningful context. Most importantly, this corpus has the potential to evaluate the generalizability of monolingual and bilingual language theories and models to the reading of long texts and narratives. The corpus is freely available at http://expsy.ugent.be/downloads/geco. |
Uschi Cop; Nicolas Dirix; Eva Van Assche; Denis Drieghe; Wouter Duyck Reading a book in one or two languages? An eye movement study of cognate facilitation in L1 and L2 reading Journal Article In: Bilingualism: Language and Cognition, vol. 20, no. 4, pp. 747–769, 2017. @article{Cop2017a, This study examined how noun reading by bilinguals is influenced by orthographic similarity with their translation equivalents in another language. Eye movements of Dutch-English bilinguals reading an entire novel in L1 and L2 were analyzed. In L2, we found a facilitatory effect of orthographic overlap. Additional facilitation for identical cognates was found for later eye movement measures. This shows that the complex, semantic context of a novel does not eliminate cross-lingual activation in natural reading. In L1 we detected non-identical cognate facilitation for first fixation durations of longer nouns. Identical cognate facilitation was found on total reading times for high frequent nouns. This study is the first to show cognate facilitation in L1 reading of narrative text. This shows that even when reading a novel in the mother tongue, lexical access is not restricted to the target language. |
James E. Cane; Heather J. Ferguson; Ian A. Apperly Using perspective to resolve reference: The impact of cognitive load and motivation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 4, pp. 591–610, 2017. @article{Cane2017, Research has demonstrated a link between perspective taking and working memory. Here we used eye tracking to examine the time course with which working memory load (WML) influences perspective-taking ability in a referential communication task and how motivation to take another's perspective modulates these effects. In Experiment 1, where there was no reward or time pressure, listeners only showed evidence of incorporating perspective knowledge during integration of the target object but did not anticipate reference to this common ground object during the pretarget-noun period. WML did not affect this perspective use. In Experiment 2, where a reward for speed and accuracy was applied, listeners used perspective cues to disambiguate the target object from the competitor object from the earliest moments of processing (i.e., during the pretarget-noun period), but only under low load. Under high load, responses were comparable with the control condition, where both objects were in common ground. Furthermore, attempts to initiate perspective-relevant responses under high load led to impaired recall on the concurrent WML task, indicating that perspective-relevant responses were drawing on limited cognitive resources. These results show that when there is ambiguity, perspective cues guide rapid referential interpretation when there is sufficient motivation and sufficient cognitive resources. |
Etzel Cardeña; Barbara Nordhjem; David Marcusson-Clavertz; Kenneth Holmqvist The "hypnotic state" and eye movements: Less there than meets the eye? Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0182546, 2017. @article{Cardena2017, Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks had been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis.
Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state." Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors. |
Christophe Carlei; David Framorando; Nicolas Burra; Dirk Kerzel Face processing is enhanced in the left and upper visual hemi-fields Journal Article In: Visual Cognition, vol. 25, no. 7-8, pp. 749–761, 2017. @article{Carlei2017, We tested whether two known hemi-field asymmetries would affect visual search with face stimuli. Holistic processing of spatial configurations is better in the left hemi-field, reflecting a right hemisphere specialization, and object recognition is better in the upper visual field, reflecting stronger projections into the ventral stream. Faces tap into holistic processing and object recognition at the same time, which predicts better performance in the left and upper hemi-field, respectively. In the first experiment, participants had to detect a face with a gaze direction different from the remaining faces. Participants were faster to respond when targets were presented in the left and upper hemi-field. The same pattern of results was observed when only the eye region was presented. In the second experiment, we turned the faces upside-down, which eliminated the typical spatial configuration of faces. The left hemi-field advantage disappeared, showing that it is related to holistic processing of faces, whereas the upper hemi-field advantage related to object recognition persisted. Finally, we made the search task easier by asking observers to search for a face with open among closed eyes or vice versa. The easy search task eliminated the need for complex object recognition and accordingly, the advantage of the upper visual field disappeared. Similarly, the left hemi-field advantage was attenuated. In sum, our findings show that both horizontal and vertical asymmetries affect search for faces and can be selectively suppressed by changing characteristics of the stimuli. |
Gareth Carrol; Kathy Conklin Cross language lexical priming extends to formulaic units: Evidence from eye-tracking suggests that this idea 'has legs' Journal Article In: Bilingualism: Language and Cognition, vol. 20, no. 2, pp. 299–317, 2017. @article{Carrol2017, Idiom priming effects (faster processing compared to novel phrases) are generally robust in native speakers but not non-native speakers. This leads to the question of how idioms and other multiword units are represented and accessed in a first (L1) and second language (L2). We address this by investigating the processing of translated Chinese idioms to determine whether known L1 combinations show idiom priming effects in non-native speakers when encountered in the L2. In two eye-tracking experiments we compared reading times for idioms vs. control phrases (Experiment 1) and for figurative vs. literal uses of idioms (Experiment 2). Native speakers of Chinese showed recognition of the L1 form in the L2, but figurative meanings were read more slowly than literal meanings, suggesting that the non-compositional nature of idioms makes them problematic in a non-native language. We discuss the results as they relate to crosslinguistic priming at the multiword level. |
Nathan Caruana; Peter Lissa; Genevieve McArthur Beliefs about human agency influence the neural processing of gaze during joint attention Journal Article In: Social Neuroscience, vol. 12, no. 2, pp. 194–206, 2017. @article{Caruana2017b, The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition |
Nathan Caruana; Genevieve McArthur; Alexandra Woolgar; Jon Brock Detecting communicative intent in a computerised test of joint attention Journal Article In: PeerJ, vol. 5, pp. 1–16, 2017. @article{Caruana2017, The successful navigation of social interactions depends on a range of cognitive faculties—including the ability to achieve joint attention with others to share information and experiences. We investigated the influence that intention monitoring processes have on gaze-following response times during joint attention. We employed a virtual reality task in which 16 healthy adults engaged in a collaborative game with a virtual partner to locate a target in a visual array. In the Search task, the virtual partner was programmed to engage in non-communicative gaze shifts in search of the target, establish eye contact, and then display a communicative gaze shift to guide the participant to the target. In the NoSearch task, the virtual partner simply established eye contact and then made a single communicative gaze shift towards the target (i.e., there were no non-communicative gaze shifts in search of the target). Thus, only the Search task required participants to monitor their partner's communicative intent before responding to joint attention bids. We found that gaze following was significantly slower in the Search task than the NoSearch task. However, the same effect on response times was not observed when participants completed non-social control versions of the Search and NoSearch tasks, in which the avatar's gaze was replaced by arrow cues. These data demonstrate that the intention monitoring processes involved in differentiating communicative and non-communicative gaze shifts during the Search task had a measurable influence on subsequent joint attention behaviour. The empirical and methodological implications of these findings for the fields of autism and social neuroscience will be discussed. |
Nathan Caruana; Dean Spirou; Jon Brock Human agency beliefs influence behaviour during virtual social interactions Journal Article In: PeerJ, vol. 5, pp. 1–18, 2017. @article{Caruana2017a, In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context.
They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals. |
Mo Chen; Yuan-Zheng Wang; Chen-Chen Ma; Qi-Ze Li; Han Zhou; Jie Fu; Qian-Qian Yang; Yong-Mei Zhang; Yu Liu; Jun-Li Cao Empathy skill-dependent modulation of working memory by painful scene Journal Article In: Scientific Reports, vol. 7, pp. 4527, 2017. @article{Chen2017e, As an important online information retaining and processing function, working memory plays critical roles in many other cognitive functions. Several long-term factors, such as age, addiction and diseases, have been affirmed to impair working memory, but whether or how short-term factors, like painful stimuli or emotions, regulate the human working memory ability is not well explored. Here we investigated the influences of empathic pain on upcoming working memory and existing working memory, by presenting human subjects with pictures depicting painful or neutral scenes. After separating the subjects into two groups, the more empathic group and the relatively indifferent group, according to a well-accepted questionnaire (the Interpersonal Reactivity Index (IRI)), the modulatory effect emerged. Empathic pain might exert either a facilitating effect or an impairing effect, which was closely correlated with personal empathy skills. Meanwhile, different aspects of subjects' empathy traits exerted distinct effects, and female subjects were more vulnerable than male subjects. The present study reveals a new modulatory manner of working memory, via empathy skill-dependent painful experience. |
Nigel T. M. Chen; Julian Basanovic; Lies Notebaert; Colin MacLeod; Patrick J. F. Clarke Attentional bias mediates the effect of neurostimulation on emotional vulnerability Journal Article In: Journal of Psychiatric Research, vol. 93, pp. 12–19, 2017. @article{Chen2017b, Transcranial direct current stimulation (tDCS) is a neuromodulatory technique which has garnered recent interest in the potential treatment for emotion-based psychopathology. While accumulating evidence suggests that tDCS may attenuate emotional vulnerability, critically, little is known about underlying mechanisms of this effect. The present study sought to clarify this by examining the possibility that tDCS may affect emotional vulnerability via its capacity to modulate attentional bias towards threatening information. Fifty healthy participants were randomly assigned to receive either anodal tDCS (2 mA/min) stimulation to the left dorsolateral prefrontal cortex (DLPFC), or sham. Participants were then eye tracked during a dual-video stressor task designed to elicit emotional reactivity, while providing a concurrent in-vivo measure of attentional bias. Greater attentional bias towards threatening information was associated with greater emotional reactivity to the stressor task. Furthermore, the active tDCS group showed reduced attentional bias to threat, compared to the sham group. Importantly, attentional bias was found to statistically mediate the effect of tDCS on emotional reactivity, while no direct effect of tDCS on emotional reactivity was observed. The findings are consistent with the notion that the effect of tDCS on emotional vulnerability may be mediated by changes in attentional bias, holding implications for the application of tDCS in emotion-based psychopathology. The findings also highlight the utility of in-vivo eye tracking measures in the examination of the mechanisms associated with DLPFC neuromodulation in emotional vulnerability. |
Qingrong Chen; Xin Huang; Le Bai; Xiaodong Xu; Yiming Yang; Michael K. Tanenhaus The effect of contextual diversity on eye movements in Chinese sentence reading Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 2, pp. 510–518, 2017. @article{Chen2017f, Recent studies have demonstrated that when contextual diversity is controlled token word frequency has minimal effects on visual word recognition. With the exception of a single experiment by Plummer, Perea, & Rayner (2014, Journal of Experimental Psychology: Learning, Memory, and Cognition, 40, 275-283), those studies have examined words in isolation. The current studies address two potential limitations of the Plummer et al. experiment. First, because Plummer et al. used different sentence frames for words in different conditions, the effects might be due to uncontrolled differences on the sentences. Second, the absence of a frequency effect might be attributed to comparing higher and lower frequency words within a limited range. Three eye-tracking experiments examined effects of contextual diversity and frequency on Mandarin Chinese, a logographic language, for words embedded in normal sentences. In Experiment 1, yoked words were rotated through the same sentence frame. Experiments 2a and 2b used a design similar to Plummer et al., which allows use of a larger sample of words to compare results between experiments with a smaller and larger difference in log frequency (0.41 and 1.06, respectively). In all three experiments, first-pass and later eye movement measures were significantly shorter for targets with higher contextual diversity than for targets with lower contextual diversity, with no effects of frequency. |
Zhuohao Chen; Jinchen Du; Min Xiang; Yan Zhang; Shuyue Zhang Social exclusion leads to attentional bias to emotional social information: Evidence from eye movement Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0186313, 2017. @article{Chen2017d, Social exclusion has many effects on individuals, including the increased need to belong and elevated sensitivity to social information. Using a self-reporting method, and an eye-tracking technique, this study explored people's need to belong and attentional bias towards the socio-emotional information (pictures of positive and negative facial expressions compared to those of emotionally-neutral expressions) after experiencing a brief episode of social exclusion. We found that: (1) socially-excluded individuals reported higher negative emotions, lower positive emotions, and stronger need to belong than those who were not socially excluded; (2) compared to a control condition, social exclusion caused a longer response time to probe dots after viewing positive or negative face images; (3) social exclusion resulted in a higher frequency ratio of first attentional fixation on both positive and negative emotional facial pictures (but not on the neutral pictures) than the control condition; (4) in the social exclusion condition, participants showed shorter first fixation latency and longer first fixation duration to positive pictures than neutral ones but this effect was not observed for negative pictures; (5) participants who experienced social exclusion also showed longer gazing duration on the positive pictures than those who did not; although group differences also existed for the negative pictures, the gaze duration bias from both groups showed no difference from chance. This study demonstrated the emotional response to social exclusion as well as characterising multiple eye-movement indicators of attentional bias after experiencing social exclusion. |
Hui-Yan Chiau; Neil G. Muggleton; Chi-Hung Juan Exploring the contributions of the supplementary eye field to subliminal inhibition using double-pulse transcranial magnetic stimulation Journal Article In: Human Brain Mapping, vol. 38, pp. 339–351, 2017. @article{Chiau2017, It is widely accepted that the supplementary eye fields (SEF) are involved in the control of voluntary eye movements. However, recent evidence suggests that SEF may also be important for unconscious and involuntary motor processes. Indeed, Sumner et al. ([2007]: Neuron 54:697-711) showed that patients with micro-lesions of the SEF demonstrated an absence of subliminal inhibition as evoked by masked-prime stimuli. Here, we used double-pulse transcranial magnetic stimulation (TMS) in healthy volunteers to investigate the role of SEF in subliminal priming. We applied double-pulse TMS at two time windows in a masked-prime task: the first during an early phase, 20-70 ms after the onset of the mask but before target presentation, during which subliminal inhibition is present; and the second during a late phase, 20-70 ms after target onset, during which the saccade is being prepared. We found no effect of TMS with the early time window of stimulation, whereas a reduction in the benefit of an incompatible subliminal prime stimulus was found when SEF TMS was applied at the late time window. These findings suggest that there is a role for SEF related to the effects of subliminal primes on eye movements, but the results do not support a role in inhibiting the primed tendency. |
Lillian Chien; Rong Liu; Christopher Girkin; Miyoung Kwon Higher contrast requirement for letter recognition and macular RGC+ layer thinning in glaucoma patients and older adults Journal Article In: Investigative Ophthalmology & Visual Science, vol. 58, no. 14, pp. 6221–6231, 2017. @article{Chien2017, Purpose: Growing evidence suggests the involvement of the macula even in early stages of glaucoma. However, little is known about the impact of glaucomatous macular damage on central pattern vision. Here we examine the contrast requirement for letter recognition and its relationship with retinal thickness in the macular region. Methods: A total of 40 participants were recruited: 13 patients with glaucoma (mean age = 65.6 +/- 6.6 years), 14 age-similar normally sighted adults (59.1 +/- 9.1 years), and 13 young normally sighted adults (21.0 +/- 2.0 years). For each participant, letter-recognition contrast thresholds were obtained using a letter recognition task in which participants identified English letters presented at varying retinal locations across the central 12 degrees visual field, including the fovea. The macular retinal ganglion cell plus inner plexiform (RGC+) layer thickness was also evaluated using spectral-domain optical coherence tomography (SD-OCT). Results: Compared to age-similar normal controls, glaucoma patients exhibited a significant increase in letter-recognition contrast thresholds (by 236%, P < 0.001) and a significant decrease in RGC+ layer thickness (by 17%, P < 0.001) even after controlling for age, pupil diameter, and visual acuity. Compared to normal young adults, older adults showed a significant increase in letter-recognition contrast thresholds and a significant decrease in RGC+ layer thickness. Across all subjects, the thickness of macular RGC+ layer was significantly correlated with letter-recognition contrast thresholds, even after correcting for pupil diameter and visual acuity (r = -0.65, P < 0.001). 
Conclusions: Our results show that both glaucoma and normal aging likely bring about a thinning of the macular RGC+ layer; the macular RGC+ layer thickness appears to be associated with the contrast requirements for letter recognition in central vision. |
Kyoung Whan Choe; Omid Kardan; Hiroki P. Kotabe; John M. Henderson; Marc G. Berman To search or to like: Mapping fixations to differentiate two forms of incidental scene memory Journal Article In: Journal of Vision, vol. 17, no. 12, pp. 1–22, 2017. @article{Choe2017, We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impairs incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks. |
Wonil Choi; Matthew W. Lowder; Fernanda Ferreira; Tamara Y. Swaab; John M. Henderson Effects of word predictability and preview lexicality on eye movements during reading: A comparison between young and older adults Journal Article In: Psychology and Aging, vol. 32, no. 3, pp. 232–242, 2017. @article{Choi2017, Previous eye-tracking research has characterized older adults' reading patterns as "risky," arguing that compared to young adults, older adults skip more words, have longer saccades, and are more likely to regress to previous portions of the text. In the present eye-tracking study, we reexamined the claim that older adults adopt a risky reading strategy, utilizing the boundary paradigm to manipulate parafoveal preview and contextual predictability of a target word. Results showed that older adults had longer fixation durations compared to young adults; however, there were no age differences in skipping rates, saccade length, or proportion of regressions. In addition, readers showed higher skipping rates of the target word if the preview string was a word than if it was a nonword, regardless of age. Finally, the effect of predictability in reading times on the target word was larger for older adults than for young adults. These results suggest that older adults' reading strategies are not as risky as was previously claimed. Instead, we propose that older adults can effectively combine top-down information from the sentence context with bottom-up information from the parafovea to optimize their reading strategies. |
Michael Christen; Mathias Abegg The effect of magnification and contrast on reading performance in different types of simulated low vision Journal Article In: Journal of Eye Movement Research, vol. 10, no. 2, pp. 1–9, 2017. @article{Christen2017, Low vision therapy, such as magnifiers or contrast enhancement, is widely used. Scientific evidence proving its efficacy is scarce however. The objective of this study was to investigate whether the benefits of magnification and contrast enhancement depended on the origin of low vision. For this purpose we measured reading speed with artificially induced low vision in 12 healthy subjects in conditions of a simulated central scotoma, blurred vision and oscillopsia. Texts were either blurred, set in motion or blanked at the gaze position by using eye tracking and gaze contingent display. The simulated visual impairment was calibrated such that all types of low vision caused equal reading impairment. We then tested the effect of magnification and contrast enhancement among the different types of low vision. We found that reading speed improved with increasing magnification and with higher contrast in all conditions. The effect of magnification was significantly different in the three low vision conditions: The gain from magnification was highest in simulated blur and least in central scotoma. Magnification eventually led to near normal reading speed in all conditions. High contrast was less effective than high magnification and the effect of contrast enhancement was similar in all low vision conditions. From these results we conclude that the type of low vision determines the benefit that can be expected from magnification. Contrast enhancement leads to similar improved reading speed in all low vision types. We provide evidence that supports the use of low vision aids. |
Kiel Christianson; Steven G. Luke; Erika K. Hussey; Kacey L. Wochna Why reread? Evidence from garden-path and local coherence structures Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 7, pp. 1380–1405, 2017. @article{Christianson2017a, Two eye-tracking experiments were conducted to compare the online reading and offline comprehension of main verb/reduced relative garden-path sentences and local coherence sentences. Rereading of early material in garden-path reduced relatives should be revisionary, aimed at reanalysing an earlier misparse; however, rereading of early material in a local coherence reduced relative need only be confirmatory, as the original parse of the earlier portion of these sentences is ultimately correct. Results of online and offline measures showed that local coherence structures elicited signals of reading disruption that arose earlier and lasted longer, and local coherence comprehension was also better than garden path comprehension. Few rereading measures in either sentence type were predicted by structural features of these sentences, nor was rereading related to comprehension accuracy, which was extremely low overall. Results are discussed with respect to selective reanalysis and good-enough processing. |
Kiel Christianson; Peiyun Zhou; Cassie Palmer; Adina Raizen Effects of context and individual differences on the processing of taboo words Journal Article In: Acta Psychologica, vol. 178, pp. 73–86, 2017. @article{Christianson2017, Previous studies suggest that taboo words are special in regards to language processing. Findings from the studies have led to the formation of two theories, global resource theory and binding theory, of taboo word processing. The current study investigates how readers process taboo words embedded in sentences during silent reading. In two experiments, measures collected include eye movement data, accuracy and reaction time measures for recalling probe words within the sentences, and individual differences in likelihood of being offended by taboo words. Although certain aspects of the results support both theories, as the likelihood of a person being offended by a taboo word influenced some measures, neither theory sufficiently predicts or describes the effects observed. The results are interpreted as evidence that processing effects ascribed to taboo words are largely, but not completely, attributable to the context in which they are used and the individual attitudes of the people who hear/read them. The results also demonstrate the importance of investigating taboo words in naturalistic language processing paradigms. A revised theory of taboo word processing is proposed that incorporates both global resource theory and binding theory along with the sociolinguistic factors and individual differences that largely drive the effects observed here. |
Antonios I. Christou; Yvonne Wallis; Hayley Bair; Maurice Zeegers; Joseph P. McCleery Serotonin 5-HTTLPR genotype modulates reactive visual scanning of social and non-social affective stimuli in young children Journal Article In: Frontiers in Behavioral Neuroscience, vol. 11, pp. 118, 2017. @article{Christou2017, Previous studies have documented the 5-HTTLPR polymorphisms as genetic variants that are involved in serotonin availability and also associated with emotion regulation and facial emotion processing. In particular, neuroimaging and behavioral studies of healthy populations have produced evidence to suggest that carriers of the Short allele exhibit heightened neurophysiological and behavioral reactivity when processing aversive stimuli, particularly in brain regions involved in fear. However, an additional distinction has emerged in the field, which highlights particular types of fearful information, i.e., aversive information which involves a social component versus non-social aversive stimuli. Although processing of each of these stimulus types (social and non-social) is believed to involve a subcortical neural system which includes the amygdala, evidence also suggests that the amygdala itself may be particularly responsive to socially significant environmental information, potentially due to the critical relevance of social information for humans. Examining individual differences in neurotransmitter systems which operate within this subcortical network, and in particular the serotonin system, may be critically informative for furthering our understanding of the neurobiological mechanisms underlying responses to emotional and affective stimuli. In the present study we examine visual scanning patterns in response to both aversive and positive images of a social or non-social nature in relation to 5-HTTLPR genotypes, in 49 children aged 4-7 years. 
Results indicate that children with at least one Short 5-HTTLPR allele spent less time fixating the threat-related non-social stimuli, compared with participants with two copies of the Long allele. Interestingly, a separate set of analyses suggests that carriers of two copies of the short 5-HTTLPR allele also spent less time fixating both the negative and positive non-social stimuli. Together, these findings support the hypothesis that genetically mediated differences in serotonin availability mediate behavioral responses to different types of emotional stimuli in young children. |
Tim Chuk; Antoni B. Chan; Janet H. Hsiao Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling Journal Article In: Vision Research, vol. 141, pp. 204–216, 2017. @article{Chuk2017a, The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. 
In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. |
Tim Chuk; Kate Crookes; William G. Hayward; Antoni B. Chan; Janet H. Hsiao Hidden Markov model analysis reveals the advantage of analytic eye movement patterns in face recognition across cultures Journal Article In: Cognition, vol. 169, pp. 102–117, 2017. @article{Chuk2017, It remains controversial whether culture modulates eye movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye movement patterns when viewing faces from different ethnicities. These inconsistencies may be due to substantial individual differences in eye movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye movement data analysis using hidden Markov models (HMMs). Each individual's eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation from culture. Significantly more participants (75%) showed similar eye movement patterns when viewing own- and other-race faces than different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. 
These results suggest that active retrieval of facial feature information through an analytic eye movement pattern may be optimal for face recognition regardless of culture. |
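The HMM-based scanpath analyses in the two Chuk et al. papers above rest on computing how likely an observed fixation sequence is under each candidate model, so that individual models can be compared and clustered. As a purely illustrative sketch (toy transition and emission values of my own invention, not the authors' toolbox, which models fixation coordinates with Gaussian emissions), the scaled forward algorithm that yields such a likelihood for a discrete-observation HMM looks like this:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs) under a discrete-observation HMM.

    pi : (K,) initial state probabilities
    A  : (K, K) state transition matrix, rows sum to 1
    B  : (K, M) emission probabilities, rows sum to 1
    obs: sequence of observation indices in range(M)
    """
    alpha = pi * B[:, obs[0]]        # joint prob. of each state and the first observation
    c = alpha.sum()
    log_lik = np.log(c)
    alpha /= c                       # rescale at every step to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Toy 2-state model: state 0 tends to emit symbol 0, state 1 tends to emit symbol 1.
pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward_loglik(pi, A, B, [0, 1, 0]))  # a negative log-likelihood
```

In the studies above, each participant's fixation data are fitted with their own HMM, and participants are then grouped (e.g., holistic vs. analytic) by the similarity of their models' likelihoods on each other's data.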
Sven Hohenstein; Hannes Matuschek; Reinhold Kliegl Linked linear mixed models: A joint analysis of fixation locations and fixation durations in natural reading Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 3, pp. 637–651, 2017. @article{Hohenstein2017, The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as dependent variable or covariate for the other in linear mixed models (LMMs) featuring also psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words) although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: The IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, and the OVP effect (at least the left part of it) is uncovered with the predicted fixation-location covariate, capturing the indirect effects of psycholinguistic covariates. 
We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research. |
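Stripped of the random-effects machinery, the linking idea in Hohenstein et al. can be illustrated with ordinary least squares on simulated data: first fit fixation location from a covariate, then enter the predicted and residual components of location as separate covariates in the duration model. This is a toy sketch with invented effect sizes, not the authors' LMM specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
word_freq = rng.normal(size=n)                            # psycholinguistic covariate
loc = 0.5 * word_freq + rng.normal(size=n)                # fixation location
dur = 1.0 * word_freq - 0.8 * loc + rng.normal(size=n)    # fixation duration

# Stage 1: regress fixation location on the covariate.
X1 = np.column_stack([np.ones(n), word_freq])
b1, *_ = np.linalg.lstsq(X1, loc, rcond=None)
loc_hat = X1 @ b1          # predicted location: the indirect (mediated) pathway
loc_res = loc - loc_hat    # residual location: e.g., saccadic error

# Stage 2: model durations with the two components as separate covariates.
X2 = np.column_stack([np.ones(n), loc_hat, loc_res])
b2, *_ = np.linalg.lstsq(X2, dur, rcond=None)
print(b2[1], b2[2])  # the two components get clearly different slopes
```

In this simulation the slope on `loc_hat` absorbs the covariate's effect routed through location (about 1.2 here), while the slope on `loc_res` isolates the pure location effect (about -0.8), mirroring the paper's distinction between indirect and direct effects on fixation durations.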
Linus Holm; Olympia Karampela; Fredrik Ullén; Guy Madison Executive control and working memory are involved in sub-second repetitive motor timing Journal Article In: Experimental Brain Research, vol. 235, no. 3, pp. 787–798, 2017. @article{Holm2017, The nature of the relationship between timing and cognition remains poorly understood. Cognitive control is known to be involved in discrete timing tasks involving durations above 1 s, but has not yet been demonstrated for repetitive motor timing below 1 s. We examined the latter in two continuation tapping experiments, by varying the cognitive load in a concurrent task. In Experiment 1, participants repeated a fixed three finger sequence (low executive load) or a pseudorandom sequence (high load) with either 524-, 733-, 1024- or 1431-ms inter-onset intervals (IOIs). High load increased timing variability for 524 and 733-ms IOIs but not for the longer IOIs. Experiment 2 attempted to replicate this finding for a concurrent memory task. Participants retained three letters (low working memory load) or seven letters (high load) while producing intervals (524- and 733-ms IOIs) with a drum stick. High load increased timing variability for both IOIs. Taken together, the experiments demonstrate that cognitive control processes influence sub-second repetitive motor timing. |
Gernot Horstmann; Stefanie I. Becker; Daniel Ernst Dwelling, rescanning, and skipping of distractors explain search efficiency in difficult search better than guidance by the target Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 291–305, 2017. @article{Horstmann2017, Prominent models of overt and covert visual search focus on explaining search efficiency by visual guidance. That some searches are fast whereas others are slow is explained by the ability of the target to guide attention to the target's position. Comparably little attention is given to other variables that might also influence search efficiency, such as dwelling on distractors, skipping distractors, and revisiting distractors. Here, we examine the relative contributions of dwelling, skipping, rescanning, and the use of visual guidance, in explaining visual search times in general, and the similarity effect in particular. The hallmark of the similarity effect is more efficient search for a target that is dissimilar to the distractors compared to a target that is similar to the distractors. In the present experiment, participants have to find an emotional face target among nine neutral face non-targets. In different blocks, the target is either more or less similar to the non-targets. Eye-tracking is used to separately measure selection latency, dwelling on distractors, and skipping and revisiting of distractors. As expected, visual search times show a large similarity effect. Similarity also has strong effects on dwelling, skipping, and revisiting, but only weak effects on visual guidance. Regression analyses show that dwelling, skipping, and revisiting determine search times on trial level. The influence of dwelling and revisiting is stronger in target absent than in target present trials, whereas the opposite is true for skipping. The similarity effect is best explained by dwelling. Additionally, including a measure of guidance does not yield substantial benefits. 
In sum, results indicate that guidance by the target is not the sole principle behind fast search; rather, distractors are less often skipped, more often visited, and longer dwelled on in slow search conditions. |
Jaakko Hotta; Jukka Saari; Miika Koskinen; Yevhen Hlushchuk; Nina Forss; Riitta Hari Abnormal brain responses to action observation in complex regional pain syndrome Journal Article In: Journal of Pain, vol. 18, no. 3, pp. 255–265, 2017. @article{Hotta2017, Patients with complex regional pain syndrome (CRPS) display various abnormalities in central motor function, and their pain is intensified when they perform or just observe motor actions. In this study, we examined the abnormalities of brain responses to action observation in CRPS. We analyzed 3-T functional magnetic resonance images from 13 upper limb CRPS patients (all female, ages 31–58 years) and 13 healthy, age- and sex-matched control subjects. The functional magnetic resonance imaging data were acquired while the subjects viewed brief videos of hand actions shown in the first-person perspective. A pattern-classification analysis was applied to characterize brain areas where the activation pattern differed between CRPS patients and healthy subjects. Brain areas with statistically significant group differences (q < .05, false discovery rate-corrected) included the hand representation area in the sensorimotor cortex, inferior frontal gyrus, secondary somatosensory cortex, inferior parietal lobule, orbitofrontal cortex, and thalamus. Our findings indicate that CRPS impairs action observation by affecting brain areas related to pain processing and motor control. Perspective This article shows that in CRPS, the observation of others' motor actions induces abnormal neural activity in brain areas essential for sensorimotor functions and pain. These results build the cerebral basis for action-observation impairments in CRPS. |
Michael C. Hout; Arryn Robbins; Hayward J. Godwin; Gemma Fitzsimmons; Collin Scarince Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles Journal Article In: Attention, Perception, and Psychophysics, vol. 79, pp. 1578–1592, 2017. @article{Hout2017, Unlike in laboratory visual search tasks—wherein participants are typically presented with a pictorial representation of the item they are asked to seek out—in real-world searches, the observer rarely has veridical knowledge of the visual features that define their target. During categorical search, observers look for any instance of a categorically defined target (e.g., helping a family member look for their mobile phone). In these circumstances, people may not have information about noncritical features (e.g., the phone's color), and must instead create a broad mental representation using the features that define (or are typical of) the category of objects they are seeking out (e.g., modern phones are typically rectangular and thin). In the current investigation (Experiment 1), using a categorical visual search task, we add to the body of evidence suggesting that categorical templates are effective enough to conduct efficient visual searches. When color information was available (Experiment 1a), attentional guidance, attention restriction, and object identification were enhanced when participants looked for categories with consistent features (e.g., ambulances) relative to categories with more variable features (e.g., sedans). When color information was removed (Experiment 1b), attention benefits disappeared, but object recognition was still better for feature-consistent target categories. In Experiment 2, we empirically validated the relative homogeneity of our societally important vehicle stimuli. 
Taken together, our results are in line with a category-consistent view of categorical target templates (Yu, Maxfield, & Zelinsky, Psychological Science, 2016, doi:10.1177/0956797616640237), and suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Processing of co-reference in autism spectrum disorder Journal Article In: Autism Research, vol. 10, no. 12, pp. 1968–1980, 2017. @article{Howard2017, Accuracy for reading comprehension and inferencing tasks has previously been reported as reduced for individuals with autism spectrum disorder (ASD), relative to typically developing (TD) controls. In this study, we used an eye movements and reading paradigm to examine whether this difference in performance accuracy is underpinned by differences in the inferential work required to compute a co-referential link. Participants read two sentences that contained a category noun (e.g., bird) that was preceded by and co-referred to an exemplar that was either typical (e.g., pigeon) or atypical (e.g., penguin). Both TD and ASD participants showed an effect of typicality for gaze durations upon the category noun, with longer times being observed when the exemplar was atypical, in comparison to typical. No group differences or interactions were detected for target processing, and verbal language proficiency was found to predict general reading and inferential skill. The only difference between groups was that individuals with ASD engaged in more re-reading than TD participants. These data suggest that readers with ASD do not differ in the efficiency with which they compute anaphoric links on-line during reading. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Investigating the use of world knowledge during on-line comprehension in adults with Autism Spectrum Disorder Journal Article In: Journal of Autism and Developmental Disorders, vol. 47, no. 7, pp. 2039–2053, 2017. @article{Howard2017a, The on-line use of world knowledge during reading was examined in adults with autism spectrum disorder (ASD). Both ASD and typically developed adults read sentences that included plausible, implausible and anomalous thematic relations, as their eye movements were monitored. No group differences in the speed of detection of the anomalous violations were found, but the ASD group showed a delay in detection of implausible thematic relations. These findings suggest that there are subtle differences in the speed of world knowledge processing during reading in ASD. |
Philippa L. Howard; Simon P. Liversedge; Valerie Benson Benchmark eye movement effects during natural reading in autism spectrum disorder Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 1, pp. 109–127, 2017. @article{Howard2017b, In 2 experiments, eye tracking methodology was used to assess on-line lexical, syntactic and semantic processing in autism spectrum disorder (ASD). In Experiment 1, lexical identification was examined by manipulating the frequency of target words. Both typically developed (TD) and ASD readers showed normal frequency effects, suggesting that the processes TD and ASD readers engage in to identify words are comparable. In Experiment 2, syntactic parsing and semantic interpretation requiring the on-line use of world knowledge were examined, by having participants read garden path sentences containing an ambiguous prepositional phrase. Both groups showed normal garden path effects when reading low-attached sentences and the time course of reading disruption was comparable between groups. This suggests that not only do ASD readers hold similar syntactic preferences to TD readers, but also that they use world knowledge on-line during reading. Together, these experiments demonstrate that the initial construction of sentence interpretation appears to be intact in ASD. However, the finding that ASD readers skip target words less often in Experiment 2, and take longer to read sentences during second pass for both experiments, suggests that they adopt a more cautious reading strategy and take longer to evaluate their sentence interpretation prior to making a manual response. |
Jing Huang; Karl R. Gegenfurtner; Alexander C. Schütz; Jutta Billino Age effects on saccadic adaptation: Evidence from different paradigms reveals specific vulnerabilities Journal Article In: Journal of Vision, vol. 17, no. 6, pp. 1–18, 2017. @article{Huang2017, Saccadic eye movements provide an opportunity to study closely interwoven perceptual, motor, and cognitive changes during aging. Here, we investigated age effects on different mechanisms of saccadic plasticity. We compared age effects in two different adaptation paradigms that tap into low- and high-level adaptation processes. A total of 27 senior adults and 25 young adults participated in our experiments. In our first experiment, we elicited adaptation by a double-step paradigm, which is designed to trigger primarily low-level, gradual motor adaptation. Age groups showed equivalent adaptation of saccadic gain. In our second experiment, adaptation was induced by a perceptual task that emphasizes high-level, fast processes. We consistently found no evidence for age-related differences in low-level adaptation; however, the fast adaptation response was significantly more pronounced in the young adult group. We conclude that low-level motor adaptation is robust during healthy aging but that high-level contributions, presumably involving executive strategies, are subject to age-related decline. Our findings emphasize the need to differentiate between specific aging processes in order to understand functional decline and stability across the adult life span. |
Nicholas Huang; Mounya Elhilali Auditory salience using natural soundscapes Journal Article In: The Journal of the Acoustical Society of America, vol. 141, no. 3, pp. 2163–2176, 2017. @article{Huang2017a, Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience. |
Po Sheng Huang An exploratory study on remote associates problem solving: Evidence of eye movement indicators Journal Article In: Thinking Skills and Creativity, vol. 24, pp. 63–72, 2017. @article{Huang2017b, In recent years, remote associates problems have been widely used to measure creative processes. However, studies have rarely explored the processes involved in remote associates problem solving. The main purpose of this study was to record eye movements while participants solved twelve remote associates problems compiled by Huang (2014). The results show the following: (1) The mean fixation duration gradually increases throughout the problem-solving process, which indicates that more problem solvers encounter impasses over the course of problem solving. This result supports the “impasse encounter” phase of insight. (2) During the initial period of problem solving, individuals display more regression counts in the fixation region than in the key region, which supports the idea that the impasses are caused by inappropriate initial representation. (3) During the middle period of the problem-solving process, the time individuals spend gazing at the key region increases, while the time that they spend gazing at the fixation region decreases. This pattern supports the “impasse resolution and insight” phase of insight. Finally, we compare the differences in eye movement between insight and remote associates problem solving. |
Jason Hubbard; David Kuhns; Theo A. J. Schäfer; Ulrich Mayr Is conflict adaptation due to active regulation or passive carry-over? Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 3, pp. 385–393, 2017. @article{Hubbard2017, Conflict-adaptation effects (i.e., reduced response-time costs on high-conflict trials following high-conflict trials) supposedly represent our cognitive system's ability to regulate itself according to current processing demands. However, currently it is not clear whether these effects reflect conflict-triggered, active regulation, or passive carry-over of previous-trial control settings. We used eye movements to examine whether the degree of experienced conflict modulates conflict-adaptation effects, as the conflict-triggered regulation view predicts. Across 2 experiments in which participants had to identify a target stimulus based on an endogenous cue while—on conflict trials—having to resist a sudden-onset distractor, we found a clear indication of conflict adaptation. This adaptation effect disappeared, however, when participants inadvertently fixated the sudden-onset distractor on the previous trial—that is, when they experienced a high degree of conflict. This pattern of results suggests that conflict adaptation can be explained parsimoniously in terms of a broader memory process that retains recently adopted control settings across trials. |
C. Hübner; Alexander C. Schütz Numerosity estimation benefits from transsaccadic information integration Journal Article In: Journal of Vision, vol. 17, no. 13, pp. 1–16, 2017. @article{Huebner2017, Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistent with previous findings on orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes on our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and is operative despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment. |
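The maximum-likelihood prediction tested by Hübner and Schütz has a standard closed form: under independent Gaussian noise, the optimal transsaccadic estimate weights each cue by its reliability (inverse variance). A minimal sketch with made-up means and variances, not the paper's fitted values:

```python
def integrate(mu_peri, var_peri, mu_fovea, var_fovea):
    """Reliability-weighted (maximum-likelihood) combination of two
    independent Gaussian estimates of the same quantity."""
    w = (1 / var_peri) / (1 / var_peri + 1 / var_fovea)  # weight on the peripheral cue
    mu = w * mu_peri + (1 - w) * mu_fovea
    var = 1 / (1 / var_peri + 1 / var_fovea)  # never worse than either cue alone
    return mu, var

# A peripheral overestimate (30 dots) and a foveal underestimate (20 dots),
# equally reliable: the combined estimate lands in between, with lower variance.
print(integrate(30.0, 4.0, 20.0, 4.0))  # (25.0, 2.0)
```

The combined variance is always below both single-cue variances, which is exactly the "benefit" of transsaccadic integration that the accuracy and reliability predictions in the paper are built on.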
Anneline Huck; Robin L. Thompson; Madeline Cruice; Jane Marshall The influence of sense-contingent argument structure frequencies on ambiguity resolution in aphasia Journal Article In: Neuropsychologia, vol. 100, pp. 171–194, 2017. @article{Huck2017a, Verbs with multiple senses can show varying argument structure frequencies, depending on the underlying sense. When acknowledge is used to mean ‘recognise', it takes a direct object (DO), but when it is used to mean ‘admit' it prefers a sentence complement (SC). The purpose of this study was to investigate whether people with aphasia (PWA) can exploit such meaning-structure probabilities during the reading of temporarily ambiguous sentences, as demonstrated for neurologically healthy individuals (NHI) in a self-paced reading study (Hare et al., 2003). Eleven people with mild or moderate aphasia and eleven neurologically healthy control participants read sentences while their eyes were tracked. Using adapted materials from the study by Hare et al., target sentences containing an SC structure (e.g. He acknowledged (that) his friends would probably help him a lot) were presented following a context prime that biased either a direct object (DO-bias) or sentence complement (SC-bias) reading of the verbs. Half of the stimulus sentences did not contain that, making the post-verbal noun phrase (his friends) structurally ambiguous. Both groups of participants were influenced by structural ambiguity as well as by the context bias, indicating that PWA can, like NHI, use their knowledge of a verb's sense-based argument structure frequency during online sentence reading. However, the individuals with aphasia showed delayed reading patterns and some individual differences in their sensitivity to context and ambiguity cues. These differences compared to the NHI may contribute to difficulties in sentence comprehension in aphasia. |
Anneline Huck; Robin L. Thompson; Madeline Cruice; Jane Marshall Effects of word frequency and contextual predictability on sentence reading in aphasia: An eye movement analysis Journal Article In: Aphasiology, vol. 31, no. 11, pp. 1307–1332, 2017. @article{Huck2017, Background: Mild reading difficulties are a pervasive symptom of aphasia. While much research in aphasia has been devoted to the study of single word reading, little is known about the process of (silent) sentence reading. Reading research in the non-brain-damaged population has benefited from the use of eye-tracking methodology, allowing inferences on cognitive processing without participants making an articulatory response. This body of research identified two factors, which strongly influence reading at the sentence level: word frequency and contextual predictability (influence of context). Aims: The main aim of this study was to investigate whether word frequency and contextual predictability influence sentence reading by people with aphasia (PWA), in parallel to that of neurologically healthy individuals (NHI). A second aim was to examine whether readers with aphasia show individual differences in the effects, and whether these are related to their underlying language profile. Methods & Procedures: Seventeen PWA with associated mild reading difficulties and 20 NHI took part in this study. Individuals with aphasia completed a range of language assessments. For the eye-tracking experiment, participants silently read sentences that included target words varying in word frequency and predictability while their eye movements were recorded. Comprehension accuracy, fixation durations, and the probability of first-pass fixations and first-pass regressions were measured. Outcomes & Results: Eye movements by both groups were significantly influenced by word frequency and predictability, but the predictability effect was stronger for the PWA than the neurologically healthy participants. 
Additionally, effects of word frequency and predictability were independent for the NHI, but the individuals with aphasia showed a more interactive pattern. Correlational analyses revealed (i) a significant relationship between lexical-semantic impairments and the word frequency effect score and (ii) a marginally significant association between the sentence comprehension skills and the predictability effect score. Conclusions: Consistent with compensatory processing theories, these findings indicate that decreased reading efficiency may trigger a more interactive reading strategy that aims to compensate for poorer reading by putting more emphasis on a sentence context, particularly for low-frequency words. For those individuals who have difficulties applying the strategy automatically, using a sentence context could be a beneficial strategy to focus on in reading intervention. |
Erika K. Hussey; J. Isaiah Harbison; Susan Teubner-Rhodes; Alan Mishler; Kayla Velnoskey; Jared M. Novick Memory and language improvements following cognitive control training Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 1, pp. 23–58, 2017. @article{Hussey2017, Cognitive control refers to adjusting thoughts and actions when confronted with conflict during information processing. We tested whether this ability is causally linked to performance on certain language and memory tasks by using cognitive control training to systematically modulate people's ability to resolve information-conflict across domains. Different groups of subjects trained on 1 of 3 minimally different versions of an n-back task: n-back-with-lures (High-Conflict), n-back-without-lures (Low-Conflict), or 3-back-without-lures (3-Back). Subjects completed a battery of recognition memory and language processing tasks that comprised both high- and low-conflict conditions before and after training. We compared the transfer profiles of (a) the High- versus Low-Conflict groups to test how conflict resolution training contributes to transfer effects, and (b) the 3-Back versus Low-Conflict groups to test for differences not involving cognitive control. High-Conflict training—but not Low-Conflict training—produced discernable benefits on several untrained transfer tasks, but only under selective conditions requiring cognitive control. This suggests that the conflict-focused intervention influenced functioning on ostensibly different outcome measures across memory and language domains. 3-Back training resulted in occasional improvements on the outcome measures, but these were not selective for conditions involving conflict resolution. We conclude that domain-general cognitive control mechanisms are plastic, at least temporarily, and may play a causal role in linguistic and nonlinguistic performance. |
John P. Hutson; Tim J. Smith; Joseph P. Magliano; Lester C. Loschky What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film Journal Article In: Cognitive Research: Principles and Implications, vol. 2, no. 46, pp. 1–30, 2017. @article{Hutson2017, Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. 
Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers' belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative. |
Duong Huynh; Srimant P. Tripathy; Harold E. Bedell; Haluk Öğmen The reference frame for encoding and retention of motion depends on stimulus set size Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 3, pp. 888–910, 2017. @article{Huynh2017, The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information. |
Shah Khalid; Gernot Horstmann; Thomas Ditye; Ulrich Ansorge In: Psychological Research, vol. 81, no. 2, pp. 508–523, 2017. @article{Khalid2017, In the current study, we tested whether a fear advantage—rapid attraction of attention to fearful faces that is more stimulus-driven than to neutral faces—is emotion specific. We used a cueing task with face cues preceding targets. Cues were non-predictive of the target locations. In two experiments, we found enhanced cueing of saccades towards the targets with fearful face cues than with neutral face cues: Saccades towards targets were more efficient with cues and targets at the same position (under valid conditions) than at opposite positions (under invalid conditions), and this cueing effect was stronger with fearful than with neutral face cues. In addition, this cueing effect difference between fearful and neutral faces was absent with inverted faces as cues, indicating that the fear advantage is face-specific. We also show that emotion categorization of the face cues mirrored these effects: Participants were better at categorizing face cues as fearful or neutral with upright than with inverted faces (Experiment 1). Finally, in alternative blocks including disgusted faces instead of fearful faces, we found more similar cueing effects with disgusted faces and neutral faces, and with upright and inverted faces (Experiment 2). Jointly, these results demonstrate that the fear advantage is emotion-specific. Results are discussed in light of evolutionary explanations of the fear advantage. |
Azizuddin Khan; Otto Loberg; Jarkko Hautala On the eye movement control of changing reading direction for a single word: The case of reading numerals in Urdu Journal Article In: Journal of Psycholinguistic Research, vol. 46, no. 5, pp. 1273–1283, 2017. @article{Khan2017, Typically, orthographies are consistent in terms of reading direction, i.e. from left-to-right or right-to-left. However, some are bidirectional, i.e., certain parts of the text (such as numerals in Urdu) are read against the default reading direction. Such sudden changes in reading direction may challenge the reader in many ways, at the level of planning of saccadic eye movements, changing the direction of attention, word recognition processes and cognitive reading strategies. The present study attempts to understand how readers achieve such sudden changes in reading direction at the level of eye movements and conscious cognitive reading strategies. Urdu readers reported employing a two-stage strategy for reading numerals by first counting the number of digits during right-to-left fixations, and only then forming numeric representation during left-to-right fixations. Eye movement findings were aligned with this strategy usage, as long numerals were often read with deliberate forward-and-backward fixation sequences. In these sequences fixations preceding saccades to default reading direction were shorter than against it, suggesting that different cognitive processes such as counting and formation of numeric representation were involved in fixations preceding left- and right-directed saccades. Finally, the change against the default reading direction was preceded by highly inflated fixation duration, pinpointing the oculomotor, attentional and cognitive demands in executing sudden changes in reading direction. |
Sayed Hossein Khatoonabadi; Ivan V. Bajić; Yufeng Shan Compressed-domain visual saliency models: A comparative study Journal Article In: Multimedia Tools and Applications, vol. 76, no. 24, pp. 26297–26328, 2017. @article{Khatoonabadi2017, Computational modeling of visual saliency has become an important research problem in recent years, with applications in video quality estimation, video compression, object tracking, retargeting, summarization, and so on. While most visual saliency models for dynamic scenes operate on raw video, several models have been developed for use with compressed-domain information such as motion vectors and transform coefficients. This paper presents a comparative study of eleven such models as well as two high-performing pixel-domain saliency models on two eye-tracking datasets using several comparison metrics. The results indicate that highly accurate saliency estimation is possible based only on a partially decoded video bitstream. The strategies that have shown success in compressed-domain saliency modeling are highlighted, and certain challenges are identified as potential avenues for further improvement. |
Tim C. Kietzmann; Anna L. Gert; Frank Tong; Peter König Representational dynamics of facial viewpoint encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 637–651, 2017. @article{Kietzmann2017, Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity. |
Dongho Kim; Savannah Lokey; Sam Ling Elevated arousal levels enhance contrast perception Journal Article In: Journal of Vision, vol. 17, no. 2, pp. 1–10, 2017. @article{Kim2017a, Our state of arousal fluctuates from moment to moment—fluctuations that can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of decreased arousal state is an impaired ability to visually process our environment. |
Nam Wook Kim; Zoya Bylinskii; Michelle A. Borkin; Krzysztof Z. Gajos; Aude Oliva; Fredo Durand; Hanspeter Pfister BubbleView: An interface for crowdsourcing image importance maps and tracking visual attention Journal Article In: ACM Transactions on Computer-Human Interaction, vol. 24, no. 5, pp. 1–40, 2017. @article{Kim2017, In this paper, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles" - small, circular areas of the image at original resolution, similar to having a confined area of focus like the eye fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types: information visualizations, natural images, static webpages, and graphic designs, and compared the clicks to eye fixations collected with eye-trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images, and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images, and works best for defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than related methodologies that use continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies as approximating eye fixations for different image and task types. |
Sujin Kim; Randolph Blake; Minyoung Lee; Chai-Youn Kim Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch Journal Article In: PLoS ONE, vol. 12, no. 4, pp. e0175103, 2017. @article{Kim2017b, Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3–19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On “pitch-congruent” trials, participants heard an auditory melody that was congruent in pitch with the visual score, on “pitch-incongruent” trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on “melody-incongruent” trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. 
Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score. |
Sara Iacozza; Albert Costa; Jon Andoni Duñabeitia What do your eyes reveal about your foreign language? Reading emotional sentences in a native and foreign language Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0186027, 2017. @article{Iacozza2017, Foreign languages are often learned in emotionally neutral academic environments which differ greatly from the familiar context where native languages are acquired. This difference in learning contexts has been argued to lead to reduced emotional resonance when confronted with a foreign language. In the current study, we investigated whether the reactivity of the sympathetic nervous system in response to emotionally-charged stimuli is reduced in a foreign language. To this end, pupil sizes were recorded while reading aloud emotional sentences in the native or foreign language. Additionally, subjective ratings of emotional impact were provided after reading each sentence, allowing us to further investigate foreign language effects on explicit emotional understanding. Pupillary responses showed a larger effect of emotion in the native than in the foreign language. However, such a difference was not present for explicit ratings of emotionality. These results reveal that the sympathetic nervous system reacts differently depending on the language context, which in turn suggests a deeper emotional processing when reading in a native compared to a foreign language. |
Guilhem Ibos; David J. Freedman Sequential sensory and decision processing in posterior parietal cortex Journal Article In: eLife, vol. 6, pp. 1–19, 2017. @article{Ibos2017, Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). |
Jaime S. Ide; Hsiang C. Tung; Cheng-Ta Yang; Yuan-Chi Tseng; Chiang-Shan R. Li In: Frontiers in Human Neuroscience, vol. 11, pp. 222, 2017. @article{Ide2017, Impulsivity is a personality trait of clinical importance. Extant research focuses on frontostriatal mechanisms of impulsivity and how executive functions are compromised in impulsive individuals. Imaging studies employing voxel based morphometry highlighted impulsivity-related changes in gray matter concentrations in a wide array of cerebral structures. In particular, whereas prefrontal cortical areas appear to show structural alterations in individuals with a neuropsychiatric condition, the findings are less than consistent in the healthy population. Here, in a sample (n = 113) of young adults assessed for Barratt impulsivity, we controlled for age, gender and alcohol use, and showed that higher impulsivity score is associated with increased gray matter volume (GMV) in bilateral medial parietal and occipital cortices known to represent the peripheral visual field. When impulsivity components were assessed, we observed that this increase in parieto-occipital cortical volume is correlated with inattention and non-planning but not motor subscore. In a separate behavioral experiment of 10 young adults, we demonstrated that impulsive individuals are more vulnerable to the influence of a distractor on target detection in an attention task. If replicated, these findings together suggest aberrant visual attention as a neural correlate of an impulsive personality trait in neurotypical individuals and need to be reconciled with the literature that focuses on frontal dysfunctions. |
Jessica L. Irons; Tamara Gradden; Angel Zhang; Xuming He; Nick Barnes; Adele F. Scott; Elinor McKone Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing Journal Article In: Vision Research, vol. 137, pp. 61–79, 2017. @article{Irons2017a, The visual prosthesis (or “bionic eye”) has become a reality but provides a low resolution view of the world. Simulating prosthetic vision in normal-vision observers, previous studies report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40 × 40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods to improve face identity recognition. Inspired by “bionic ear” improvements achieved by altering signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show caricaturing improved identity recognition in memory and/or perception (degree by which two faces look dissimilar) down to a resolution of 32 × 32 with 30% phosphene dropout. Findings imply caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants. |
Jessica L. Irons; Minjeong Jeon; Andrew B. Leber Pre-stimulus pupil dilation and the preparatory control of attention Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0188787, 2017. @article{Irons2017, Task preparation involves multiple component processes, including a general evaluative process that signals the need for adjustments in control, and the engagement of task-specific control settings. Here we examined the dynamics of these different mechanisms in preparing the attentional control system for visual search. We explored preparatory activity using pupil dilation, a well-established measure of task demands and effortful processing. In an initial exploratory experiment, participants were cued at the start of each trial to search for either a salient color singleton target (an easy search task) or a low-salience shape singleton target (a difficult search task). Pupil dilation was measured during the preparation period from cue onset to search display onset. Mean dilation was larger in preparation for the difficult shape target than the easy color target. In two additional experiments, we sought to vary effects of evaluative processing and task-specific preparation separately. Experiment 2 showed that when the color and shape search tasks were matched for difficulty, the shape target no longer evoked larger dilations, and the pattern of results was in fact reversed. In Experiment 3, we manipulated difficulty within a single feature dimension, and found that the difficult search task evoked larger dilations. These results suggest that pupil dilation reflects expectations of difficulty in preparation for a search task, consistent with the activity of an evaluative mechanism. We did not find consistent evidence for a relationship between pupil dilation and search performance (accuracy and response timing), suggesting that pupil dilation during search preparation may not be strongly linked to ongoing task-specific preparation. |
Roxane J. Itier; Karly N. Neath-Tavares Effects of task demands on the early neural processing of fearful and happy facial expressions Journal Article In: Brain Research, vol. 1663, pp. 38–50, 2017. @article{Itier2017, Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the face nose was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. |
Miho Iwasaki; Kodai Tomita; Yasuki Noguchi Non-uniform transformation of subjective time during action preparation Journal Article In: Cognition, vol. 160, pp. 51–61, 2017. @article{Iwasaki2017, Although many studies have reported a distortion of subjective (internal) time during preparation and execution of actions, it is highly controversial whether actions cause a dilation or compression of time. In the present study, we tested a hypothesis that the previous controversy (dilation vs. compression) partly resulted from a mixture of two types of sensory inputs on which a time length was estimated; some studies asked subjects to measure the time of presentation for a single continuous stimulus (stimulus period, e.g. the duration of a long-lasting visual stimulus on a monitor) while others required estimation of a period without continuous stimulations (no-stimulus period, e.g. an inter-stimulus interval between two flashes). Results of our five experiments supported this hypothesis, showing that action preparation induced a dilation of a stimulus period, whereas a no-stimulus period was not subject to this dilation and could sometimes be compressed by action preparation. Those results provided a new insight into a previous view assuming a uniform dilation or compression of subjective time by actions. Our findings about the distinction between stimulus and no-stimulus periods also might contribute to a resolution of mixed results (action-induced dilation vs. compression) in the previous literature. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson In: Attention, Perception, and Psychophysics, vol. 79, no. 8, pp. 2338–2353, 2017. @article{Jabar2017, When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute. |
Syaheed B. Jabar; Alex Filipowicz; Britt Anderson Tuned by experience: How orientation probability modulates early perceptual processing Journal Article In: Vision Research, vol. 138, pp. 86–96, 2017. @article{Jabar2017a, Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive ‘P300’ component which might be related to either surprise or decision-making. However, the early ‘C1’ component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. |
Stephanie Jainta; Mirela Nikolova; Simon P. Liversedge Does text contrast mediate binocular advantages in reading? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 1, pp. 55–68, 2017. @article{Jainta2017, Humans typically make use of both of their eyes in reading and efficient processes of binocular vision provide a stable, single percept of the text. Binocular reading also comes with an advantage: reading speed is high and word frequency effects (i.e., faster lexical processing of words that are more often encountered in a language) emerge during fixations, which is not the case for monocular reading (Jainta, Blythe, & Liversedge, 2014). A potential contributor to this benefit is the reduced contrast in monocular reading: reduced text contrasts in binocular reading are known to slow down reading and word identification (Reingold & Rayner, 2006). To investigate whether contrast reduction mediates the binocular advantage, we first replicated increased reading time and nullified frequency effects for monocular reading (Experiment 1). Next, we reduced the contrast of binocularly presented whole sentences to 70% (Weber contrast); this reading condition resembled monocular reading, but we found no effect on reading speed and word identification (Experiment 2). A reasonable conclusion, therefore, was that a reduction in contrast is not the (primary) factor that mediates less efficient lexical processing under monocular reading. In a third experiment (Experiment 3) we reduced the sentence contrast to 40% and the pattern of results showed that, globally, reading was slowed down but clear word frequency effects were present in the data. Thus, word identification processes during reading (i.e., the word frequency effect) were qualitatively different in monocular reading compared with effects observed when text was read with substantially reduced contrast. |
Debra Jared; Sarah Bainbridge Reading homophone puns: Evidence from eye tracking Journal Article In: Canadian Journal of Experimental Psychology, vol. 71, no. 1, pp. 2–13, 2017. @article{Jared2017, We investigated how readers make sense of homophone puns (e.g., The butcher was very glad we could meat up) by tracking their eye movements as they read. Comparison sentences included homophone-error sentences in which the presented homophone was also not correct (e.g., The lawyer was very glad we could meat up) and sentences in which the homophone was correct for the context (e.g., The butcher was very glad to chop meat up for the stew). An effect of the frequency of the unpresented homophone mate (e.g., meet) was found on first-pass reading times for homophones, indicating that participants activated the meaning of the homophone mate through shared phonology. First-fixation and gaze durations on the homophones were longer in puns than in correct-context sentences, indicating that participants immediately noticed that the homophone was incongruous with the adjacent context (e.g., glad we could meat) in puns, but total reading times did not differ, suggesting that the incongruity was quickly resolved. Immediate reading times on homophones in puns and homophone-error sentences did not differ, but total reading times did, suggesting that the impact of the critical context word (e.g., butcher) is delayed. Further analyses examined the resolution process in more detail. Ratings of the funniness of the puns were most strongly related to the strength of the association between the homophone and the critical context word (e.g., butcher). |
Debra Jared; Katrina O'Donnell Skilled adult readers activate the meanings of high-frequency words using phonology: Evidence from eye tracking Journal Article In: Memory & Cognition, vol. 45, no. 2, pp. 334–346, 2017. @article{Jared2017a, We examined whether highly skilled adult readers activate the meanings of high-frequency words using phonology when reading sentences for meaning. A homophone-error paradigm was used. Sentences were written to fit 1 member of a homophone pair, and then 2 other versions were created in which the homophone was replaced by its mate or a spelling-control word. The error words were all high-frequency words, and the correct homophones were either higher-frequency words or low-frequency words—that is, the homophone errors were either the subordinate or dominant member of the pair. Participants read sentences as their eye movements were tracked. When the high-frequency homophone error words were the subordinate member of the homophone pair, participants had shorter immediate eye-fixation latencies on these words than on matched spelling-control words. In contrast, when the high-frequency homophone error words were the dominant member of the homophone pair, a difference between these words and spelling controls was delayed. These findings provide clear evidence that the meanings of high-frequency words are activated by phonological representations when skilled readers read sentences for meaning. Explanations of the differing patterns of results depending on homophone dominance are discussed. |
Juhani Järvikivi; Roger P. G. van Gompel; Jukka Hyönä The interplay of implicit causality, structural heuristics, and anaphor type in ambiguous pronoun resolution Journal Article In: Journal of Psycholinguistic Research, vol. 46, no. 3, pp. 525–550, 2017. @article{Jaervikivi2017, Two visual-world eye-tracking experiments investigating pronoun resolution in Finnish examined the time course of implicit causality information relative to both grammatical role and order-of-mention information. Experiment 1 showed an effect of implicit causality that appeared at the same time as the first-mention preference. Furthermore, when we counterbalanced the semantic roles of the verbs, we found no effect of grammatical role, suggesting the standard observed subject preference has a large semantic component. Experiment 2 showed that both the personal pronoun "hän" and the demonstrative "tämä" preferred the antecedent consistent with the implicit causality bias; "tämä" was not interpreted as referring to the semantically non-prominent entity. In contrast, structural prominence affected "hän" and "tämä" differently: we found a first-mention preference for "hän," but a second-mention preference for "tämä." The results suggest that semantic implicit causality information has an immediate effect on pronoun resolution and its use is not delayed relative to order-of-mention information. Furthermore, they show that order-of-mention differentially affects different types of anaphoric expressions, but semantic information has the same effect. |
Wolfgang Jaschinski Individual objective and subjective fixation disparity in near vision Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0170190, 2017. @article{Jaschinski2017, Binocular vision refers to the integration of images in the two eyes for improved visual performance and depth perception. One aspect of binocular vision is the fixation disparity, which is a suboptimal condition in individuals with respect to binocular eye movement control and subsequent neural processing. The objective fixation disparity refers to the vergence angle between the visual axes, which is measured with eye trackers. Subjective fixation disparity is tested with two monocular nonius lines which indicate the physical nonius separation required for perceived alignment. Subjective and objective fixation disparity represent the different physiological mechanisms of motor and sensory fusion, but the precise relation between these two is still unclear. This study measures both types of fixation disparity at viewing distances of 40, 30, and 24 cm while observers fixated a central stationary fusion target. 20 young adult subjects with normal binocular vision were tested repeatedly to investigate individual differences. For heterophoria and subjective fixation disparity, this study replicated that the binocular system does not properly adjust to near targets: outward (exo) deviations typically increase as the viewing distance is shortened. This exo proximity effect, however, was not found for objective fixation disparity, which on average was zero. But individuals can have reliable outward (exo) or inward (eso) vergence errors. Cases with eso objective fixation disparity tend to have less exo states of subjective fixation disparity and heterophoria. In summary, the two types of fixation disparity seem to respond in a different way when the viewing distance is shortened. Motor and sensory fusion, as reflected by objective and subjective fixation disparity, exhibit complex interactions that may differ between individuals (eso versus exo) and vary with viewing distance (far versus near vision). |
Su Keun Jeong; Yaoda Xu Task-context-dependent linear representation of multiple visual objects in human parietal cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 10, pp. 1778–1789, 2017. @article{Jeong2017, A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object processing area. We obtained fMRI response patterns to object pairs and their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single objects and object pairs. In the lateral occipital region, the representation for a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. Nevertheless, when we equated the amount of task information present by examining responses from two pairs of objects, we found that representations for the average of two object pairs were indistinguishable in both parietal regions from the average of another two object pairs containing the same four component objects but with a different pairing of the objects (i.e., the average of AB and CD vs. that of AD and CB). Thus, when task information was held consistent, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference of how visual information may be represented in ventral visual and parietal regions. |
Alexandra Jesse; Katja Poellmann; Ying-Yee Kong English listeners use suprasegmental cues to lexical stress early during spoken-word recognition Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 60, pp. 190–198, 2017. @article{Jesse2017, Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results: Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions: Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. |
Jianrong Jia; Ling Liu; Fang Fang; Huan Luo Sequential sampling of visual objects during sustained attention Journal Article In: PLoS Biology, vol. 15, no. 6, pp. e2001903, 2017. @article{Jia2017b, In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether this dynamic mechanism mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a central role for a temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest that selective attention, in addition to the classically posited attentional “focus,” involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time. |