All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications through 2023 (with some early 2024 publications) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2016 |
Christian P. Janssen; Preeti Verghese Training eye movements for visual search in individuals with macular degeneration Journal Article In: Journal of Vision, vol. 16, no. 15, pp. 1–20, 2016. @article{Janssen2016, We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. |
Debra Jared; Jane Ashby; Stephen J. Agauas; Betty Ann Levy Phonological activation of word meanings in grade 5 readers Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 4, pp. 524–541, 2016. @article{Jared2016, Three experiments examined the role of phonology in the activation of word meanings in Grade 5 students. In Experiment 1, homophone and spelling control errors were embedded in a story context and participants performed a proofreading task as they read for meaning. For both good and poor readers, more homophone errors went undetected than spelling control errors. In Experiments 2 and 3, homophone and spelling control errors were in sentence contexts. Experiment 2 used an online sentence verification task, and found that both good and poor readers were less accurate when sentences contained a homophone error than a spelling control error. Furthermore, a difference between the 2 types of sentences was observed even when participants were concurrently performing an articulation task. In Experiment 3, initial reading times were shorter on homophone errors than on spelling controls, and participants were less likely to make a regression from homophone errors than spelling controls. These experiments provide clear evidence that phonology makes an important contribution to the activation of word meanings in Grade 5 readers. |
Srikant Jayaraman; Raymond M. Klein; Matthew D. Hilchey; Gouri Shanker Patil; Ramesh Kumar Mishra Spatial gradients of oculomotor inhibition of return in deaf and normal adults Journal Article In: Experimental Brain Research, vol. 234, no. 1, pp. 323–330, 2016. @article{Jayaraman2016, We explored the effect of deafness on the spatial (gradient) and temporal (decay) properties of oculomotor inhibition of return (IOR) using a task developed by Vaughan (Theoretical and applied aspects of eye movement research. Elsevier, North Holland, pp 143-150, 1984) in which participants made a sequence of saccades to carefully placed targets. Unlike IOR tasks in which ignored cues are used to explore the aftereffects of covert orienting, this task better approximates real-world behavior in which participants are free to make eye movements to potentially relevant inputs. Because IOR is a bias against returning attention and gaze to a previously attended location, we expected to find, and we did find, slower saccades toward previously fixated locations. Replicating Vaughan, a gradient of inhibition around a previously fixated location was observed, and this inhibition began to decay after 1200 ms. Importantly, there were no significant differences between the deaf and the normal-hearing subjects in the magnitude of oculomotor IOR, its decay over time, or its gradient around the previously fixated location. |
Laurence C. Jayet Bray; Sonia Bansal; Wilsaan M. Joiner Quantifying the spatial extent of the corollary discharge benefit to transsaccadic visual perception Journal Article In: Journal of Neurophysiology, vol. 115, no. 3, pp. 1132–1145, 2016. @article{JayetBray2016, Extraretinal information, such as corollary discharge (CD), is hypothesized to help compensate for saccade-induced visual input disruptions. However, support for this hypothesis is largely for one-dimensional transsaccadic visual changes, with little comprehensive information on the spatial characteristics. Here we systematically mapped the two-dimensional extent of this compensation by quantifying the insensitivity to different displacement metrics. Human subjects made saccades to targets positioned at different amplitudes (4° or 8°) and directions (rightward, oblique, or upward). After the saccade the initial target disappeared and, after a blank period, reappeared at a shifted location-a collinear, diagonal, or orthogonal displacement. Subjects reported the perceived shift direction, and we determined the displacement detection based on the perceptual judgments. The two-dimensional insensitivity fields resulting from the perceptual thresholds had spatial features similar to the saccadic eye movement variability: 1) scaled with movement amplitude, 2) oriented (less sensitive to the change) along the saccade vector, and 3) approximately constant in shape when normalized by movement amplitude. In addition, comparing the postsaccadic perceptual estimate of the presaccadic target location to that based solely on the postsaccade visual error showed that overall the perceptual estimate was approximately 50% more accurate and 35% less variable than estimates based solely on this visual information. However, this relationship was not uniform: The benefit of extraretinal information was observed largely for displacements with a component parallel to the saccade vector. 
These results suggest a graded use of extraretinal information when forming the postsaccadic perceptual evaluation of transsaccadic environmental changes. |
Su Keun Jeong; Yaoda Xu The impact of top-down spatial attention on laterality and hemispheric asymmetry in the human parietal cortex Journal Article In: Journal of Vision, vol. 16, no. 10, pp. 1–21, 2016. @article{Jeong2016, The human parietal cortex exhibits a preference to contralaterally presented visual stimuli (i.e., laterality) as well as an asymmetry between the two hemispheres with the left parietal cortex showing greater laterality than the right. Using visual short-term memory and perceptual tasks and varying target location predictability, this study examined whether hemispheric laterality and asymmetry are fixed characteristics of the human parietal cortex or whether they are dynamic and modulated by the deployment of top-down attention to the target present hemifield. Two parietal regions were examined here that have previously been shown to be involved in visual object individuation and identification and are located in the inferior and superior intraparietal sulcus (IPS), respectively. Across three experiments, significant laterality was found in both parietal regions regardless of attentional modulation with laterality being greater in the inferior than superior IPS, consistent with their roles in object individuation and identification, respectively. Although the deployment of top-down attention had no effect on the superior IPS, it significantly increased laterality in the inferior IPS. The deployment of top-down spatial attention can thus amplify the strength of laterality in the inferior IPS. Hemispheric asymmetry, on the other hand, was absent in both brain regions and only emerged in the inferior but not the superior IPS with the deployment of top-down attention. Interestingly, the strength of hemispheric asymmetry significantly correlated with the strength of laterality in the inferior IPS. Hemispheric asymmetry thus seems to only emerge when there is a sufficient amount of laterality present in a brain region. |
Danique Jeurissen; Matthew W. Self; Pieter R. Roelfsema Serial grouping of 2D-image regions with object-based attention in humans Journal Article In: eLife, vol. 5, pp. 1–22, 2016. @article{Jeurissen2016, After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. |
Yu-Cin Jian Fourth graders' cognitive processes and learning strategies for reading illustrated biology texts: Eye movement measurements Journal Article In: Reading Research Quarterly, vol. 51, no. 1, pp. 93–109, 2016. @article{Jian2016, Previous research suggests that multiple representations can improve science reading comprehension. This facilitation effect is premised on the observation that readers can efficiently integrate information in text and diagram formats; however, this effect in young readers is still contested. Using eye-tracking technology and sequential analysis, this study investigated students' reading strategies and comprehension of illustrated biology texts in relation to adult readers' performance. The target population was fourth-grade students with high reading ability, and the control group was university students. All participants read a biology article from an elementary school science textbook containing two illustrations, one representational and one decorative. After the reading task, participants answered questions on recognition, textual, and illustration items. Unsurprisingly, the university students outperformed the younger students on all tests; however, more interestingly, eye movement patterns differed across the two groups. The adult readers demonstrated bidirectional reading pathways for both text and illustrations, whereas the fourth graders' eye fixations only went back and forth within paragraphs in the text and between the illustrations, but made fewer references to both text and illustration. This suggests that regardless of their high reading ability, fourth-grade students' visual literacy is not mature enough to perceive connections between corresponding features of different representations crucial to reading comprehension. Despite differences in cognitive processes between adult readers and young readers, high-ability young readers still have certain capabilities in reading comprehension. 
The results of sequential analysis showed that they looked back to previous paragraphs frequently, indicating that they were monitoring their comprehension. |
Yu-Cin Jian; Chao-Jung Wu In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016. @article{Jian2016a, Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed the arrow group spent less time reading the diagram and text which conveyed less complicated concept than the non-arrow group, but both groups allocated considerable cognitive resources on complicated diagram and sentences. Overall, this study found learners were able to construct less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation needed text information. Another interesting finding was kinematic information conveyed via diagrams is independent of that via text on some areas. |
Stephanie Y. Chen; Brian H. Ross; Gregory L. Murphy Eyetracking reveals multiple-category use in induction Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 7, pp. 1050–1067, 2016. @article{Chen2016b, Category information is used to predict properties of new category members. When categorization is uncertain, people often rely on only one, most likely category to make predictions. Yet studies of perception and action often conclude that people combine multiple sources of information near-optimally. We present a perception-action analog of category-based induction using eye movements as a measure of prediction. The categories were objects of different shapes that moved in various directions. Experiment 1 found that people integrated information across categories in predicting object motion. The results of Experiment 2 suggest that the integration of information found in Experiment 1 was not a result of explicit strategies. Experiment 3 tested the role of explicit categorization, finding that making a categorization judgment, even an uncertain one, stopped people from using multiple categories in our eye-movement task. Experiment 4 found that induction was indeed based on category-level predictions rather than associations between object properties and directions. |
Zailiang Chen; Huajie Huang; Hailan Shen; Beiji Zou; Jiang Wang ROI extraction based on visual salience and visual evaluation Journal Article In: International Journal of Autonomous and Adaptive Communications Systems, vol. 9, no. 1/2, pp. 57, 2016. @article{Chen2016a, With saliency map generated from visual attention model, this paper proposes two regions of interest (ROI) extraction algorithms respectively based on salient points and saliency regions. The former one adopts statistical and clustering techniques, selects cluster centre as seed points to fill outline map of image and finally makes mask operation between filled outline map and input image to implement ROI extraction. The latter one is based on salient regions, uses improved Grabcut image segmentation algorithm and saliency map generated from visual attention model to implement ROI extraction. To evaluate the performance of two proposed algorithms, this paper uses extracted ROI based on eye-movement data as evaluation criterion. The results show the algorithm based on salient points is applicable to extract simple images and has less runtime. The algorithm based on saliency regions is applicable to extract colourful and complex image. These two algorithms can be combined to get better performance. |
Kyoung Whan Choe; Randolph Blake; Sang-Hun Lee Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation Journal Article In: Vision Research, vol. 118, pp. 48–59, 2016. @article{Choe2016, Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (~4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. |
Hak Soo Choi; Shinjung Kim; Donghoon Lee; Chang Seok Kim; Myung Yung Jeong Synchronized tracking of brain cognitive processing using EEG and vision signals Journal Article In: Applied Spectroscopy Reviews, vol. 51, no. 7-9, pp. 592–602, 2016. @article{Choi2016, Many efforts have been made to understand the neural mechanisms of the human brain. However, visualization of human brain processing has been a main challenge in the field. It is still largely unknown how the human brain allocates attention to target objects while excluding unrelated information in a complex visual environment. Using simultaneous electroencephalogram and eye tracking measurements, in this study, we analyzed two brain regions separately to detect the brain wave activity during visual information processing. We observed an activation difference between sensory (P100) and cognitive (P300) processing, and the behavioral response was improved by providing valid cue-target location information. Furthermore, neural processing was evaluated according to the specific area of brain activation and eye movements during cognitive processing. Our results demonstrate the correlation between behavior performance and visual stimuli and suggest an advantage of combined paradigms for efficient visual information processing. |
Woo Young Choi; Jayalakshmi Viswanathan; Jason J. S. Barton The temporal dynamics of the distractor in the global effect Journal Article In: Experimental Brain Research, vol. 234, no. 9, pp. 2457–2463, 2016. @article{Choi2016a, In the global effect, saccades are displaced towards a distractor if the latter is near to the target, an effect thought to reflect spatial averaging in neurons of the superior colliculus. The temporal dynamics of the global effect have not been well studied, however. We had twelve subjects perform horizontal saccades to a target in trials in which there was either no distractor or a distractor stimulus located 20° above or below the target. The distractor appeared either simultaneously with the target or preceded it by an interval of between 100 and 800 ms, and was either flashed for only 100 ms or remained visible until the subject responded with a saccade. Both flashed and persistent distractors reduced saccadic latency if they preceded target onset, indicating that subjects could use this cue to prepare saccades in advance. Saccadic endpoint was displaced towards a flashed distractor only if it was simultaneous with the target. However, persistent distractors produced a global effect for both simultaneous presentation and distractor–target intervals of 100 ms, but not for longer intervals. We conclude that the global effect requires of the distractor both a recent onset and persistence, and that distractor-related activity decays rapidly within 300 ms. |
Heeyoung Choo; Dirk B. Walther In: NeuroImage, vol. 135, pp. 32–44, 2016. @article{Choo2016, Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? We here show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA) and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas and contour junctions in high-level scene-selective brain regions. |
Katri K. Cornelissen; Piers L. Cornelissen; Peter J. B. Hancock; Martin J. Tovée Fixation patterns, not clinical diagnosis, predict body size over-estimation in eating disordered women and healthy controls Journal Article In: International Journal of Eating Disorders, vol. 49, no. 5, pp. 507–518, 2016. @article{Cornelissen2016, OBJECTIVE: A core feature of anorexia nervosa (AN) is an over-estimation of body size. Women with AN have a different pattern of eye-movements when judging bodies, but it is unclear whether this is specific to their diagnosis or whether it is found in anyone over-estimating body size. METHOD: To address this question, we compared the eye movement patterns from three participant groups while they carried out a body size estimation task: (i) 20 women with recovering/recovered anorexia (rAN) who had concerns about body shape and weight and who over-estimated body size, (ii) 20 healthy controls who had normative levels of concern about body shape and who estimated body size accurately (iii) 20 healthy controls who had normative levels of concern about body shape but who did over-estimate body size. RESULTS: Comparisons between the three groups showed that: (i) accurate body size estimators tended to look more in the waist region, and this was independent of clinical diagnosis; (ii) there is a pattern of looking at images of bodies, particularly viewing the upper parts of the torso and face, which is specific to participants with rAN but which is independent of accuracy in body size estimation. DISCUSSION: Since the over-estimating controls did not share the same body image concerns that women with rAN report, their over-estimation cannot be explained by attitudinal concerns about body shape and weight. These results suggest that a distributed fixation pattern is associated with over-estimation of body size and should be addressed in treatment programs. |
Jason C. Coronel; Kara D. Federmeier In: Communication Research, vol. 43, no. 7, pp. 922–944, 2016. @article{Coronel2016, Gender-based political stereotypes pervade the media environment in the United States, and this may cause voters to automatically activate these stereotypes while evaluating politicians. In the research reported here, we investigate whether voters are able to reduce the automatic activation of unwanted stereotypes and how political sophistication influences this capacity. The current experiment uses self-reports to measure controlled stereotyping, and we develop a new eye movement metric to measure automatic stereotyping. We find that political sophisticates are more effective than novices at reducing unwanted gender-based political stereotypes. This study has two main implications for communication research. First, the results suggest that the effects of gender-based automatic stereotyping—induced by the information environment—on political judgments may not be as powerful as some of the current literature portrays them to be. Second, this study adds eye movements to the arsenal of tools available to communication scholars interested in measuring covert forms of stereotyping. |
M. Gabriela Costello; Dantong Zhu; Paul J. May; Emilio Salinas; Terrence R. Stanford Task dependence of decision- and choice-related activity in monkey oculomotor thalamus Journal Article In: Journal of Neurophysiology, vol. 115, no. 1, pp. 581–601, 2016. @article{Costello2016, Oculomotor signals circulate within putative recurrent feedback loops that include the frontal eye field (FEF) and the oculomotor thalamus (OcTh). To examine how OcTh contributes to visuomotor control, and perceptually informed saccadic choices in particular, neural correlates of perceptual judgment and motor selection in OcTh were evaluated and compared with those previously reported for FEF in the same subjects. Monkeys performed three tasks: a choice task in which perceptual decisions are urgent, a choice task in which identical decisions are made without time pressure, and a single-target, delayed saccade task. The OcTh yielded far fewer task-responsive neurons than the FEF, but across responsive pools, similar neuron types were found, ranging from purely visual to purely saccade related. Across such types, the impact of the perceptual information relevant to saccadic choices was qualitatively the same in FEF and OcTh. However, distinct from that in FEF, activity in OcTh was strongly task dependent, typically being most vigorous in the urgent task, less so in the easier choice task, and least in the single-target task. This was true for responsive and nonresponsive cells alike. Neurons with exclusively motor-related activity showed strong task dependence, fired less, and differed most patently from their FEF counterparts, whereas those that combined visual and motor activity fired most similarly to their FEF counterparts. The results suggest that OcTh activity is more distantly related to saccade production per se, because its degree of commitment to a motor choice varies markedly as a function of ongoing cognitive or behavioral demands. |
Antoine Coutrot; Nicola Binetti; Charlotte Harrison; Isabelle Mareschal; Alan Johnston Face exploration dynamics differentiate men and women Journal Article In: Journal of Vision, vol. 16, no. 14, pp. 1–19, 2016. @article{Coutrot2016, The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern, highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimuli effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models from computer vision to clinical psychology. |
Hayley Crawford; Joanna Moss; Chris Oliver; Natasha Elliott; Giles M. Anderson; Joseph P. McCleery Visual preference for social stimuli in individuals with autism or neurodevelopmental disorders: An eye-tracking study Journal Article In: Molecular Autism, vol. 7, no. 1, pp. 1–12, 2016. @article{Crawford2016, Background: Recent research has identified differences in relative attention to competing social versus non-social video stimuli in individuals with autism spectrum disorder (ASD). Whether attentional allocation is influenced by the potential threat of stimuli has yet to be investigated. This is manipulated in the current study by the extent to which the stimuli are moving towards or moving past the viewer. Furthermore, little is known about whether such differences exist across other neurodevelopmental disorders. This study aims to determine if adolescents with ASD demonstrate differences in attentional allocation to competing pairs of social and non-social video stimuli, where the actor or object either moves towards or moves past the viewer, in comparison to individuals without ASD, and to determine if individuals with three genetic syndromes associated with differing social phenotypes demonstrate differences in attentional allocation to the same stimuli. Methods: In study 1, adolescents with ASD and control participants were presented with social and non-social video stimuli in two formats (moving towards or moving past the viewer) whilst their eye movements were recorded. This paradigm was then employed with groups of individuals with fragile X, Cornelia de Lange, and Rubinstein-Taybi syndromes who were matched with one another on chronological age, global adaptive behaviour, and verbal adaptive behaviour (study 2). Results: Adolescents with ASD demonstrated reduced looking-time to social versus non-social videos only when stimuli were moving towards them. 
Individuals in the three genetic syndrome groups showed similar looking-time but differences in fixation latency for social stimuli moving towards them. Across both studies, we observed within- and between-group differences in attention to social stimuli that were moving towards versus moving past the viewer. Conclusions: Taken together, these results provide strong evidence to suggest differential visual attention to competing social versus non-social video stimuli in populations with clinically relevant, genetically mediated differences in socio-behavioural phenotypes. |
Sarah C. Creel; Dolly P. Rojo; Angelica Nicolle Paullada Effects of contextual support on preschoolers' accented speech comprehension Journal Article In: Journal of Experimental Child Psychology, vol. 146, pp. 156–180, 2016. @article{Creel2016, Young children often hear speech in unfamiliar accents, but relatively little research characterizes their comprehension capacity. The current study tested preschoolers' comprehension of familiar-accented versus unfamiliar-accented speech with varying levels of contextual support from sentence frames (full sentences vs. isolated words) and from visual context (four salient pictured alternatives vs. the absence of salient visual referents). The familiar accent advantage was more robust when visual context was absent, suggesting that previous findings of good accent comprehension in infants and young children may result from ceiling effects in easier tasks (e.g., picture fixation, picture selection) relative to the more difficult tasks often used with older children and adults. In contrast to prior work on mispronunciations, where most errors were novel object responses, children in the current study did not select novel object referents above chance levels. This suggests that some property of accented speech may dissuade children from inferring that an unrecognized familiar-but-accented word has a novel referent. Finally, children showed detectable accent processing difficulty despite presumed incidental community exposure. Results suggest that preschoolers' accented speech comprehension is still developing, consistent with theories of protracted development of speech processing. |
Frédéric Crevecoeur; Douglas P. Munoz; Stephen H. Scott Dynamic multisensory integration: Somatosensory speed trumps visual accuracy during feedback control Journal Article In: Journal of Neuroscience, vol. 36, no. 33, pp. 8598–8611, 2016. @article{Crevecoeur2016, Recent advances in movement neuroscience have consistently highlighted that the nervous system performs sophisticated feedback control over very short time scales (100ms for upper limb). These observations raise the important question of how the nervous system processes multiple sources of sensory feedback in such short time intervals, given that temporal delays across sensory systems such as vision and proprioception differ by tens of milliseconds. Here we show that during feedback control, healthy humans use dynamic estimates of hand motion that rely almost exclusively on limb afferent feedback even when visual information about limb motion is available. We demonstrate that such reliance on the fastest sensory signal during movement is compatible with dynamic Bayesian estimation. These results suggest that the nervous system considers not only sensory variances but also temporal delays to perform optimal multisensory integration and feedback control in real-time. |
Deborah A. Cronin; James R. Brockmole Evaluating the influence of a fixated object's spatio-temporal properties on gaze control Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 4, pp. 996–1003, 2016. @article{Cronin2016, Despite recent progress in understanding the factors that determine where an observer will eventually look in a scene, we know very little about what determines how an observer decides where he or she will look next. We investigated the potential roles of object-level representations in the direction of subsequent shifts of gaze. In five experiments, we considered whether a fixated object's spatial orientation, implied motion, and perceived animacy affect gaze direction when shifting overt attention to another object. Eye movements directed away from a fixated object were biased in the direction it faced. This effect was not modified by implying a particular direction of inanimate or animate motion. Together, these results suggest that decisions regarding where one should look next are in part determined by the spatial, but not by the implied temporal, properties of the object at the current locus of fixation. |
Yuwei Cui; Liu D. Liu; James M. McFarland; Christopher C. Pack; Daniel A. Butts Inferring cortical variability from local field potentials Journal Article In: Journal of Neuroscience, vol. 36, no. 14, pp. 4121–4135, 2016. @article{Cui2016, The responses of sensory neurons can be quite different to repeated presentations of the same stimulus. Here, we demonstrate a direct link between the trial-to-trial variability of cortical neuron responses and network activity that is reflected in local field potentials (LFPs). Spikes and LFPs were recorded with a multielectrode array from the middle temporal (MT) area of the visual cortex of macaques during the presentation of continuous optic flow stimuli. A maximum likelihood-based modeling framework was used to predict single-neuron spiking responses using the stimulus, the LFPs, and the activity of other recorded neurons. MT neuron responses were strongly linked to gamma oscillations (maximum at 40 Hz) as well as to lower-frequency delta oscillations (1-4 Hz), with consistent phase preferences across neurons. The predicted modulation associated with the LFP was largely complementary to that driven by visual stimulation, as well as the activity of other neurons, and accounted for nearly half of the trial-to-trial variability in the spiking responses. Moreover, the LFP model predictions accurately captured the temporal structure of noise correlations between pairs of simultaneously recorded neurons, and explained the variation in correlation magnitudes observed across the population. These results therefore identify signatures of network activity related to the variability of cortical neuron responses, and suggest their central role in sensory cortical function. |
Evan T. Curtis; Matthew G. Huebner; Jo-Anne LeFevre The relationship between problem size and fixation patterns during addition, subtraction, multiplication, and division Journal Article In: Journal of Numerical Cognition, vol. 2, no. 2, pp. 91–115, 2016. @article{Curtis2016, Eye-tracking methods have only rarely been used to examine the online cognitive processing that occurs during mental arithmetic on simple arithmetic problems, that is, addition and multiplication problems with single-digit operands (e.g., operands 2 through 9; 2 + 3, 6 x 8) and the inverse subtraction and division problems (e.g., 5 – 3; 48 ÷ 6). Participants (N = 109) solved arithmetic problems from one of the four operations while their eye movements were recorded. We found three unique fixation patterns. During addition and multiplication, participants allocated half of their fixations to the operator and one-quarter to each operand, independent of problem size. The pattern was similar on small subtraction and division problems. However, on large subtraction problems, fixations were distributed approximately evenly across the three stimulus components. On large division problems, over half of the fixations occurred on the left operand, with the rest distributed between the operation sign and the right operand. We discuss the relations between these eye tracking patterns and other research on the differences in processing across arithmetic operations. |
Steven C. Dakin; Philip R. K. Turnbull Similar contrast sensitivity functions measured using psychophysics and optokinetic nystagmus Journal Article In: Scientific Reports, vol. 6, pp. 34514, 2016. @article{Dakin2016, Although the contrast sensitivity function (CSF) is a particularly useful way of characterising functional vision, its measurement relies on observers making reliable perceptual reports. Such procedures can be challenging when testing children. Here we describe a system for measuring the CSF using an automated analysis of optokinetic nystagmus (OKN); an involuntary oscillatory eye movement made in response to drifting stimuli, here spatial-frequency (SF) band-pass noise. Quantifying the strength of OKN in the stimulus direction allows us to estimate contrast sensitivity across a range of SFs. We compared the CSFs of 30 observers with normal vision measured using both OKN and perceptual report. The approaches yield near-identical CSFs (mean R = 0.95) that capture subtle intra-observer variations in visual acuity and contrast sensitivity (both R = 0.84, p < 0.0001). Trial-by-trial analysis reveals high correlation between OKN and perceptual report, a signature of a common neural mechanism for determining stimulus direction. We also observe conditions where OKN and report are significantly decorrelated as a result of a minority of observers experiencing direction-reversals that are not reflected by OKN. We conclude that there are a wide range of stimulus conditions for which OKN can provide a valid alternative means of measuring of the CSF. |
Olga Dal Monte; Matthew Piva; Jason A. Morris; Steve W. C. Chang Live interaction distinctively shapes social gaze dynamics in rhesus macaques Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1626–1643, 2016. @article{DalMonte2016, The dynamic interaction of gaze between individuals is a hallmark of social cognition. However, very few studies have examined social gaze dynamics after mutual eye contact during real-time interactions. We used a highly quantifiable paradigm to assess social gaze dynamics between pairs of monkeys and modeled these dynamics using an exponential decay function to investigate sustained attention after mutual eye contact. When monkeys were interacting with real partners compared with static images and movies of the same monkeys, we found a significant increase in the proportion of fixations to the eyes and a smaller dispersion of fixations around the eyes, indicating enhanced focal attention to the eye region. Notably, dominance and familiarity between the interacting pairs induced separable components of gaze dynamics that were unique to live interactions. Gaze dynamics of dominant monkeys after mutual eye contact were associated with a greater number of fixations to the eyes, whereas those of familiar pairs were associated with a faster rate of decrease in this eye-directed attention. Our findings endorse the notion that certain key aspects of social cognition are only captured during interactive social contexts and dependent on the elapsed time relative to socially meaningful events. |
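The Dal Monte et al. abstract above describes modeling post-eye-contact gaze dynamics with an exponential decay function. As a minimal sketch of that kind of fit (not the authors' analysis code; the function name and toy data are invented for illustration), the parameters of y(t) = a·exp(−b·t) can be recovered by linear regression on log y:

```python
# Fit y(t) = a * exp(-b * t) to a time series by linear regression
# on log(y). Illustrative only: the variable names and toy data are
# stand-ins, not the authors' data or fitting procedure.
import math

def fit_exp_decay(t, y):
    """Return (a, b) from least squares on log(y) = log(a) - b*t."""
    n = len(t)
    ly = [math.log(v) for v in y]
    mt = sum(t) / n
    ml = sum(ly) / n
    cov = sum((ti - mt) * (li - ml) for ti, li in zip(t, ly))
    var = sum((ti - mt) ** 2 for ti in t)
    slope = cov / var                 # slope of the log-linear fit = -b
    return math.exp(ml - slope * mt), -slope

# Noise-free toy data generated with a = 2.0, b = 0.5, so the fit
# should recover those parameters.
t = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]
a, b = fit_exp_decay(t, y)
```

With noisy data one would typically use nonlinear least squares instead, but the log-linear trick keeps the sketch dependency-free.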
Mario Dalmaso; S. Gareth Edwards; Andrew P. Bayliss Re-encountering individuals who previously engaged in joint gaze modulates subsequent gaze cueing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 2, pp. 271–284, 2016. @article{Dalmaso2016, We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants' eye movement target. Two faces always engaged in joint gaze with the participant, whereas 2 other faces never engaged in joint gaze. Then, we assessed standard gaze cueing in response to these faces to ascertain the effect of these prior interactions on subsequent social attention episodes. In Experiment 1, the face's eyes moved before the participant's target appeared, meaning that the participant always gaze-followed 2 faces and never gaze-followed 2 other faces. We found that this prior experience modulated the timecourse of subsequent gaze cueing. In Experiments 2a/b, the participant looked at the target first, then was either followed (i.e., the participant initiated joint gaze), or was not followed. These participants then showed an overall decrement of gaze cueing with individuals who had previously followed participants' eyes (Experiment 2a), an effect that was associated with autism spectrum quotient scores and modulated perceived trustworthiness of the faces (Experiment 2b). Experiment 3 demonstrated that these modulations are unlikely to be because of the association of different levels of task difficulty with particular faces. 
These findings suggest that establishing joint gaze with others influences subsequent social attention processes that are generally thought to be relatively insensitive to learning from prior episodes. |
Rong-Fuh Day; Peng-Yeng Yin; Yu-Chi Wang; Ching-Hui Chao A new hybrid multi-start tabu search for finding hidden purchase decision strategies in WWW based on eye-movements Journal Article In: Applied Soft Computing, vol. 48, pp. 217–229, 2016. @article{Day2016, It is known that the decision strategy performed by a subject is implicit in his/her external behaviors. Eye movement is one of the observable external behaviors when humans are performing decision activities. Due to the dramatic increase of e-commerce volume on WWW, it is beneficial for the companies to know where the customers focus their attention on the webpage in deciding to make a purchase. This study proposes a new hybrid multi-start tabu search (HMTS) algorithm for finding the hidden decision strategies by clustering the eye-movement data obtained during the decision activities. The HMTS uses adaptive memory and employs both multi-start and local search strategies. An empirical dataset containing 294 eye-fixation sequences and a synthetic dataset consisting of 360 sequences were experimented with. We conduct the Sign test and the result shows that the proposed HMTS method significantly outperforms its variants which implement just one strategy, and the HMTS algorithm shows an improvement over genetic algorithm, particle swarm optimization, and K-means, with a level of significance α = 0.01. The scalability and robustness of the HMTS is validated through a series of statistical tests. |
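The Day et al. abstract above does not include the HMTS algorithm itself, but the general technique it builds on, multi-start local search with a tabu list, can be sketched for a toy one-dimensional k-medoids clustering problem (all data, parameters, and names below are illustrative stand-ins, not the authors' method):

```python
# Minimal multi-start tabu search for k-medoids clustering.
# Illustrative sketch of the generic technique only; distances,
# data, and parameters are invented stand-ins.
import random

def cost(points, medoids):
    # Total distance from each point to its nearest medoid.
    return sum(min(abs(p - m) for m in medoids) for p in points)

def tabu_search(points, k, iters=50, tabu_len=5, starts=5, seed=0):
    rng = random.Random(seed)
    best_medoids, best_cost = None, float("inf")
    for _ in range(starts):                     # multi-start strategy
        medoids = rng.sample(points, k)
        tabu = []                               # recently swapped-out medoids
        cur = cost(points, medoids)
        for _ in range(iters):                  # local search with tabu list
            out = rng.choice(medoids)
            cand = rng.choice([p for p in points if p not in medoids])
            if cand in tabu:
                continue                        # forbidden (tabu) move
            trial = [cand if m == out else m for m in medoids]
            c = cost(points, trial)
            if c < cur:                         # accept improving swap
                medoids, cur = trial, c
                tabu.append(out)
                tabu = tabu[-tabu_len:]
        if cur < best_cost:
            best_medoids, best_cost = medoids, cur
    return sorted(best_medoids), best_cost

# Two obvious clusters; the search should place one medoid in each.
data = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
medoids, c = tabu_search(data, k=2)
```

The tabu list blocks the search from immediately undoing a swap, which is what distinguishes tabu search from plain hill climbing; the multiple restarts reduce sensitivity to the initial medoid choice.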
Trevor Brothers; Matthew J. Traxler Anticipating syntax during reading: Evidence from the boundary change paradigm Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 12, pp. 1894–1906, 2016. @article{Brothers2016, Previous evidence suggests that grammatical constraints have a rapid influence during language comprehension, particularly at the level of word categories (noun, verb, preposition). These findings are in conflict with a recent study from Angele, Laishley, Rayner, and Liversedge (2014), in which sentential fit had no early influence on word skipping rates during reading. In the present study, we used a gaze-contingent boundary change paradigm to manipulate the syntactic congruity of an upcoming noun or verb outside of participants' awareness. Across 3 experiments (total N = 148), we observed higher skipping rates for syntactically valid previews (The admiral would not confess . . .), when compared with violation previews (The admiral would not surgeon . . .). Readers were less likely to skip an ungrammatical continuation, even when that word was repeated within the same sentence (The admiral would not admiral . . .), suggesting that word-class constraints can take precedence over lexical repetition effects. To our knowledge, these results provide the first evidence for an influence of syntactic context during parafoveal word recognition. On the basis of the early time-course of this effect, we argue that readers can use grammatical constraints to generate syntactic expectations for upcoming words. |
Susanne Brouwer; Ann R. Bradlow The temporal dynamics of spoken word recognition in adverse listening conditions Journal Article In: Journal of Psycholinguistic Research, vol. 45, no. 5, pp. 1151–1160, 2016. @article{Brouwer2016, This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. candle), an onset competitor (e.g. candy), a rhyme competitor (e.g. sandal), and an unrelated distractor (e.g. lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition. |
Simona Buetti; Alejandro Lleras Distractibility is a function of engagement, not task difficulty: Evidence from a new oculomotor capture paradigm Journal Article In: Journal of Experimental Psychology: General, vol. 145, no. 10, pp. 1382–1405, 2016. @article{Buetti2016, It has been shown that when humans require a brief moment of concentration or mental effort, they tend to avert their gaze away from the attended location (or even blink). Similarly, participants tend to miss unexpected events when they are highly focused on a task. We present an engagement theory of distractibility that is meant to capture the relationship between participants' engagement in a task and reduction in sensitivity to new sensory events in a broad range of situations. In a series of experiments, we asked participants to perform different cognitive tasks of varying degrees of difficulty while we measured spontaneous oculomotor capture by new images that were completely unrelated to the participants' task. The images appeared while participants were cognitively engaged in the task. Our results showed that increased cognitive engagement produced decreased sensitivity to visual events. We propose that individual differences in intrinsic motivation play a large role in determining sensitivity to task unrelated events. In addition, our results also indicate that changes in task difficulty on a trial-to-trial basis do not generate trial-by-trial differences in oculomotor capture. Importantly, we believe our framework provides us with a promising way of extending laboratory findings to many real world situations. |
Tom Bullock; James C. Elliott; John T. Serences; Barry Giesbrecht Acute exercise modulates feature-selective responses in human cortex Journal Article In: Journal of Cognitive Neuroscience, vol. 29, no. 4, pp. 605–618, 2016. @article{Bullock2016, An organism's current behavioral state influences ongoing brain activity. Nonhuman mammalian and invertebrate brains exhibit large increases in the gain of feature-selective neural responses in sensory cortex during locomotion, suggesting that the visual system becomes more sensitive when actively exploring the environment. This raises the possibility that human vision is also more sensitive during active movement. To investigate this possibility, we used an inverted encoding model technique to estimate feature-selective neural response profiles from EEG data acquired from participants performing an orientation discrimination task. Participants (n = 18) fixated at the center of a flickering (15 Hz) circular grating presented at one of nine different orientations and monitored for a brief shift in orientation that occurred on every trial. Participants completed the task while seated on a stationary exercise bike at rest and during low- and high-intensity cycling. We found evidence for inverted-U effects, such that the peak of the reconstructed feature-selective tuning profiles was highest during low-intensity exercise compared with those estimated during rest and high-intensity exercise. When modeled, these effects were driven by changes in the gain of the tuning curve and in the profile bandwidth during low-intensity exercise relative to rest. Thus, despite profound differences in visual pathways across species, these data show that sensitivity in human visual cortex is also enhanced during locomotive behavior. Our results reveal the nature of exercise-induced gain on feature-selective coding in human sensory cortex and provide valuable evidence linking the neural mechanisms of behavior state across species. |
Antimo Buonocore; Robert D. McIntosh; David Melcher Beyond the point of no return: Effects of visual distractors on saccade amplitude and velocity Journal Article In: Journal of Neurophysiology, vol. 115, no. 2, pp. 752–762, 2016. @article{Buonocore2016a, Visual transients, such as a bright flash, reduce the proportion of saccades executed, ∼60–125 ms after flash onset, a phenomenon known as saccadic inhibition (SI). Across three experiments, we apply a similar time-course analysis to the amplitudes and velocities of saccades. Alongside the expected reduction of saccade frequency in the key time period, we report two perturbations of the “main sequence”: one before and one after the period of SI. First, saccades launched between 30 and 70 ms, following the flash, were hypometric, with peak speed exceeding that expected for a saccade of similar amplitude. This finding was in contrast to the common idea that saccades have passed a “point of no return,” ∼60 ms before launching, escaping interference from distractors. The early hypometric saccades observed were not a consequence of spatial averaging between target and distractor locations, as they were found not only following a localized central flash (experiment 1) but also following a spatially generalized flash (experiment 2). Second, across experiments, saccades launched at 110 ms postflash, toward the end of SI, had normal amplitude but a peak speed higher than expected for that amplitude, suggesting increased collicular excitation at the time of launching. Overall, the results show that saccades that escape inhibition following a visual transient are not necessarily unaffected but instead, can reveal interference in spatial and kinematic measures. |
Melanie R. Burke; R. O. Coats Dissociation of the rostral and dorsolateral prefrontal cortex during sequence learning in saccades: A TMS investigation Journal Article In: Experimental Brain Research, vol. 234, no. 2, pp. 597–604, 2016. @article{Burke2016, This experiment sought to find whether differences exist between the dorsolateral prefrontal cortex (DLPFC) and the medial rostral prefrontal cortex (MRPFC) for performing stimulus-independent and stimulus-oriented tasks, respectively. To find a causal relationship in these areas, we employed the use of trans-cranial magnetic stimulation (TMS). Prefrontal areas were stimulated whilst participants performed random or predictable sequence learning tasks at stimulus onset (1st presentation of the sequence only for both Random and Predictable), or during the inter-sequence interval. Overall, we found that during the predictable task a significant decrease in saccade latency, gain and duration was found when compared to the randomised conditions, as expected and observed previously. However, TMS stimulation in DLPFC during the delay in the predictive sequence learning task reduced this predictive ability by delaying the saccadic onset and generating abnormal reductions in saccadic gains during prediction. In contrast, we found that stimulation during a delay in MRPFC reversed the normal effects on peak velocity of the task with the predictive task revealing higher peak velocity than the randomised task. These findings provide causal evidence for independent functions of DLPFC and MRPFC in performing stimulus-independent processing during sequence learning in saccades. |
Anke Cajar; Ralf Engbert; Jochen Laubrock Spatial frequency processing in the central and peripheral visual field during scene viewing Journal Article In: Vision Research, vol. 127, pp. 186–197, 2016. @article{Cajar2016a, Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial-frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control. |
Anke Cajar; Paul Schneeweiss; Ralf Engbert; Jochen Laubrock Coupling of attention and saccades when viewing scenes with central and peripheral degradation Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–19, 2016. @article{Cajar2016, Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations. |
Aurélie Calabrèse; Jean-Baptiste Bernard; Géraldine Faure; Louis Hoffart; Eric Castet Clustering of eye fixations: A new oculomotor determinant of reading speed in maculopathy Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 7, pp. 3192–3202, 2016. @article{Calabrese2016, Purpose: To describe and quantify a largely unnoticed oculomotor pattern that often occurs when patients with central field loss (CFL) read continuous text: Horizontal distribution of eye fixations dramatically varies across sentences and often reveals clusters. Also to statistically analyze the effect of this new factor on reading speed while controlling for the effect of saccadic amplitude (measured in letters per forward saccade, L/FS), an established oculomotor effect. Methods: Quantification of nonuniformity of eye fixations (NUF factor) was based on statistical analysis of the curvature of fixation distributions. Linear mixed-effects analyses were performed to predict reading speed from oculomotor factors based on eye movements of 34 AMD and 4 Stargardt patients (better eye decimal acuity from 0.08 to 0.3). Single-line French sentences were read aloud by these patients, who all had a dense scotoma covering the fovea as assessed with MP1 microperimetry. Results: Nonuniformity of fixations is a strong determinant of reading speed (−0.76 log units; 95% confidence interval [CI] [−0.86, −0.66]). This effect is not confounded with the effect of L/FS. The per sentence proportion of trials with clustering is predicted by the frequency of occurrence of the lowest-frequency word in each sentence. Conclusions: The NUF factor is a new oculomotor predictor of reading speed. This effect is independent of the effect of L/FS. Reading performance, as well as motivation to read, might be enhanced if new visual aids or automatic text simplification were used to reduce the occurrence of fixation clustering. |
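The Calabrèse et al. abstract above derives its NUF factor from the curvature of fixation distributions, which is not reproduced here. As a much simpler stand-in, the intuition that clustered fixations are "nonuniform" can be illustrated with the coefficient of variation of the gaps between sorted horizontal fixation positions (function name and data are invented; this is not the paper's measure):

```python
# Toy nonuniformity statistic for horizontal fixation positions:
# coefficient of variation of gaps between sorted fixations.
# Evenly spaced fixations give ~0; clustered fixations give larger
# values. Illustrative stand-in only, not the paper's NUF factor.
import statistics

def gap_cv(xs):
    xs = sorted(xs)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

uniform = [0, 1, 2, 3, 4, 5]             # evenly spread fixations
clustered = [0, 0.1, 0.2, 4.8, 4.9, 5]   # two tight fixation clusters
```

Here `gap_cv(uniform)` is 0 while `gap_cv(clustered)` is well above 1, capturing the qualitative contrast the paper quantifies with a more principled curvature-based analysis.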
Damien Camors; Yves Trotter; Pierre Pouget; Sophie Gilardeau; Jean-Baptiste Durand Visual straight-ahead preference in saccadic eye movements Journal Article In: Scientific Reports, vol. 6, pp. 23124, 2016. @article{Camors2016, Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades. |
Florence Campana; Ignacio Rebollo; Anne E. Urai; Valentin Wyart; Catherine Tallon-Baudry Conscious vision proceeds from global to local content in goal-directed tasks and spontaneous vision Journal Article In: Journal of Neuroscience, vol. 36, no. 19, pp. 5200–5213, 2016. @article{Campana2016, The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. |
Maria Nella Carminati; Pia Knoeferle Priming younger and older adults' sentence comprehension: Insights from dynamic emotional facial expressions and pupil size measures Journal Article In: The Open Psychology Journal, vol. 9, no. 1, pp. 129–148, 2016. @article{Carminati2016, Background: Prior visual-world research has demonstrated that emotional priming of spoken sentence processing is rapidly modulated by age. Older and younger participants saw two photographs of a positive and of a negative event side-by-side and listened to a spoken sentence about one of these events. Older adults' fixations to the mentioned (positive) event were enhanced when the still photograph of a previously-inspected positive-valence speaker face was (vs. wasn't) emotionally congruent with the event/sentence. By contrast, the younger adults exhibited such an enhancement with negative stimuli only. Objective: The first aim of the current study was to assess the replicability of these findings with dynamic face stimuli (unfolding from neutral to happy or sad). A second goal was to assess a key prediction made by socio-emotional selectivity theory, viz. that the positivity effect (a preference for positive information) displayed by older adults involves cognitive effort. Method: We conducted an eye-tracking visual-world experiment. Results: Most priming and age effects, including the positivity effects, replicated. However, against our expectations, the positive gaze preference in older adults did not co-vary with a standard measure of cognitive effort - increased pupil dilation. Instead, pupil size was significantly bigger when (both younger and older) adults processed negative than positive stimuli. Conclusion: These findings are in line with previous research on the relationship between positive gaze preferences and pupil dilation. We discuss both theoretical and methodological implications of these results. |
Gareth Carrol; Kathy Conklin; Henrik Gyllstad Found in translation: The influence of the L1 on the reading of idioms in an L2 Journal Article In: Studies in Second Language Acquisition, vol. 38, pp. 403–443, 2016. @article{Carrol2016, Formulaic language represents a challenge to even the most proficient of language learners. Evidence is mixed as to whether native and nonnative speakers process it in a fundamentally different way, whether exposure can lead to more nativelike processing for nonnatives, and how L1 knowledge is used to aid comprehension. In this study we investigated how advanced nonnative speakers process idioms encountered in their L2. We used eye-tracking to see whether a highly proficient group of L1 Swedes showed any evidence of a formulaic processing advantage for English idioms. We also compared translations of Swedish idioms and congruent idioms (items that exist in both languages) to see how L1 knowledge is utilized during online processing. Results support the view that L1 knowledge is automatically used from the earliest stages of processing, regardless of whether sequences are congruent, and that exposure and advanced proficiency can lead to nativelike formulaic processing in the L2. |
Valeria C. Caruso; Daniel S. Pages; Marc A. Sommer; Jennifer M. Groh In: Journal of Neurophysiology, vol. 115, no. 6, pp. 3162–3173, 2016. @article{Caruso2016, Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. |
Carlos R. Cassanello; Sven Ohl; Martin Rolfs Saccadic adaptation to a systematically varying disturbance Journal Article In: Journal of Neurophysiology, vol. 116, no. 2, pp. 336–350, 2016. @article{Cassanello2016, Saccadic adaptation maintains the correct mapping between eye movements and their targets, yet the dynamics of saccadic gain changes in the presence of systematically varying disturbances has not been extensively studied. Here, we assessed changes in the gain of saccade amplitudes induced by continuous and periodic post-saccadic visual feedback. Observers made saccades following a sequence of target steps either along the horizontal meridian (Two-way adaptation) or with unconstrained saccade directions (Global adaptation). An intra-saccadic step – following a sinusoidal variation as a function of the trial number (with three different frequencies tested in separate blocks) – consistently displaced the target along its vector. The oculomotor system responded to the resulting feedback error by modifying saccade amplitudes in a periodic fashion with similar frequency of variation but lagging the disturbance by a few trials. This periodic response was superimposed on a drift towards stronger hypometria with similar asymptotes and decay rates across stimulus conditions. The magnitude of the periodic response decreased with increasing frequency and was smaller and more delayed for Global than Two-way adaptation. These results suggest that – in addition to the well-characterized return-to-the-baseline response observed in protocols using constant visual feedback – the oculomotor system attempts to minimize the feedback error by integrating its variation across trials. This process resembles a convolution with an internal response function, whose structure would be determined by coefficients of the learning model. Our protocol reveals this fast learning process in single short experimental sessions, qualifying it for the study of sensorimotor learning in health and disease. |
Monica S. Castelhano; Richelle L. Witherspoon How you use it matters: Object function guides attention during visual search in scenes Journal Article In: Psychological Science, vol. 27, no. 5, pp. 606–621, 2016. @article{Castelhano2016, How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object's function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects' functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology. |
Gloria Chamorro; Antonella Sorace; Patrick Sturt What is the source of L1 attrition? The effect of recent L1 re-exposure on Spanish speakers under L1 attrition Journal Article In: Bilingualism: Language and Cognition, vol. 19, no. 3, pp. 520–532, 2016. @article{Chamorro2016, The recent hypothesis that L1 attrition affects the ability to process interface structures but not knowledge representations (Sorace, 2011) is tested by investigating the effects of recent L1 re-exposure on antecedent preferences for Spanish pronominal subjects, using offline judgements and online eye-tracking measures. Participants included a group of native Spanish speakers experiencing L1 attrition ('attriters'), a second group of attriters exposed exclusively to Spanish before they were tested ('re-exposed'), and a control group of Spanish monolinguals. The judgement data shows no significant differences between the groups. Moreover, the monolingual and re-exposed groups are not significantly different from each other in the eye-tracking data. The results of this novel manipulation indicate that attrition effects decrease due to L1 re-exposure, and that bilinguals are sensitive to input changes. Taken together, the findings suggest that attrition affects online sensitivity with interface structures rather than causing a permanent change in speakers' L1 knowledge representations. |
Gloria Chamorro; Patrick Sturt; Antonella Sorace Selectivity in L1 attrition: Differential object marking in Spanish near-native speakers of English Journal Article In: Journal of Psycholinguistic Research, vol. 45, no. 3, pp. 697–715, 2016. @article{Chamorro2016a, Previous research has shown L1 attrition to be restricted to structures at the interfaces between syntax and pragmatics, but not to occur with syntactic properties that do not involve such interfaces ('Interface Hypothesis', Sorace and Filiaci in Anaphora resolution in near-native speakers of Italian. Second Lang Res 22: 339-368, 2006). The present study tested possible L1 attrition effects on a syntax-semantics interface structure [Differential Object Marking (DOM) using the Spanish personal preposition] as well as the effects of recent L1 re-exposure on the potential attrition of these structures, using offline and eye-tracking measures. Participants included a group of native Spanish speakers experiencing attrition ('attriters'), a second group of attriters exposed exclusively to Spanish before they were tested, and a control group of Spanish monolinguals. The eye-tracking results showed very early sensitivity to DOM violations, which was of an equal magnitude across all groups. The off-line results also showed an equal sensitivity across groups. These results reveal that structures involving 'internal' interfaces like the DOM do not undergo attrition either at the processing or representational level. |
Raymond Chang; Alexis T. Baria; Matthew W. Flounders; Biyu J. He Unconsciously elicited perceptual prior Journal Article In: Neuroscience of Consciousness, vol. 2016, no. 1, pp. niw008, 2016. @article{Chang2016, Increasing evidence over the past decade suggests that vision is not simply a passive, feed-forward process in which cortical areas relay progressively more abstract information to those higher up in the visual hierarchy, but rather an inferential process with top-down processes actively guiding and shaping perception. However, one major question that persists is whether such processes can be influenced by unconsciously perceived stimuli. Recent psychophysics and neuroimaging studies have revealed that while consciously perceived stimuli elicit stronger responses in higher visual and frontoparietal areas than those that fail to reach conscious awareness, the latter can still drive high-level brain and behavioral responses. We investigated whether unconscious processing of a masked natural image could facilitate subsequent conscious recognition of its degraded counterpart (a black-and-white "Mooney" image) presented many seconds later. We found that this is indeed the case, suggesting that conscious vision may be influenced by priors established by unconscious processing of a fleeting image. |
Mircea I. Chelaru; Valentin Dragoi Negative correlations in visual cortical networks Journal Article In: Cerebral Cortex, vol. 26, no. 1, pp. 246–256, 2016. @article{Chelaru2016, The amount of information encoded by cortical circuits depends critically on the capacity of nearby neurons to exhibit trial-to-trial (noise) correlations in their responses. Depending on their sign and relationship to signal correlations, noise correlations can either increase or decrease the population code accuracy relative to uncorrelated neuronal firing. Whereas positive noise correlations have been extensively studied using experimental and theoretical tools, the functional role of negative correlations in cortical circuits has remained elusive. We addressed this issue by performing multiple-electrode recording in the superficial layers of the primary visual cortex (V1) of alert monkey. Despite the fact that positive noise correlations decayed exponentially with the difference in the orientation preference between cells, negative correlations were uniformly distributed across the population. Using a statistical model for Fisher Information estimation, we found that a mild increase in negative correlations causes a sharp increase in network accuracy even when mean correlations were held constant. To examine the variables controlling the strength of negative correlations, we implemented a recurrent spiking network model of V1. We found that increasing local inhibition and reducing excitation causes a decrease in the firing rates of neurons while increasing the negative noise correlations, which in turn increase the population signal-to-noise ratio and network accuracy. Altogether, these results contribute to our understanding of the neuronal mechanism involved in the generation of negative correlations and their beneficial impact on cortical circuit function. |
Cheng Chen; Kaibin Jin; Yehua Li; Hongmei Yan The attentional dependence of emotion cognition is variable with the competing task Journal Article In: Frontiers in Behavioral Neuroscience, vol. 10, pp. 219, 2016. @article{Chen2016, The relationship between emotion and attention has fascinated researchers for decades. Many previous studies have used eye-tracking, ERP, MEG, and fMRI to explore this issue but have reached different conclusions: some researchers hold that emotion cognition is an automatic process and independent of attention, while some others believed that emotion cognition is modulated by attentional resources and is a type of controlled processing. The present research aimed to investigate this controversy, and we hypothesized that the attentional dependence of emotion cognition is variable with the competing task. Eye-tracking technology and a dual-task paradigm were adopted, and subjects' attention was manipulated to fixate at the central task to investigate whether subjects could detect the emotional faces presented in the peripheral area with a decrease or near-absence of attention. The results revealed that when the peripheral task was emotional face discrimination but the central attention-demanding task was different, subjects performed well in the peripheral task, which means that emotional information can be processed in parallel with other stimuli, and there may be a specific channel in the human brain for processing emotional information. However, when the central and peripheral tasks were both emotional face discrimination, subjects could not perform well in the peripheral task, indicating that the processing of emotional information required attentional resources and that it is a type of controlled processing. Therefore, we concluded that the attentional dependence of emotion cognition varied with the competing task. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 1, pp. 18–29, 2016. @article{Chen2016c, Several studies indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a byproduct of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable both in terms of onset time and in terms of kinematics, pursuit was not identical to the one produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP), and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination. |
Jing Chen; Matteo Valsecchi; Karl R. Gegenfurtner Role of motor execution in the ocular tracking of self-generated movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 6, pp. 2586–2593, 2016. @article{Chen2016d, When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving anticipation could come from motor commands during finger motor execution or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation if it moved in the same direction as the hand compared with the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating that there is a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands. |
Ming Chen; Peichao Li; Shude Zhu; Chao Han; Haoran Xu; Yang Fang; Jiaming Hu; Anna W. Roe; Haidong D. Lu An orientation map for motion boundaries in macaque V2 Journal Article In: Cerebral Cortex, vol. 26, no. 1, pp. 279–287, 2016. @article{Chen2016e, The ability to extract the shape of moving objects is fundamental to visual perception. However, where such computations are processed in the visual system is unknown. To address this question, we used intrinsic signal optical imaging in awake monkeys to examine cortical response to perceptual contours defined by motion contrast (motion boundaries, MBs). We found that MB stimuli elicit a robust orientation response in area V2. Orientation maps derived from subtraction of orthogonal MB stimuli aligned well with the orientation maps obtained with luminance gratings (LGs). In contrast, area V1 responded well to LGs, but exhibited a much weaker orientation response to MBs. We further show that V2 direction domains respond to motion contrast, which is required in the detection of MB in V2. These results suggest that V2 represents MB information, an important prerequisite for shape recognition and figure-ground segregation. |
Helen E. Clark; John A. Perrone; Robert B. Isler; Samuel G. Charlton The role of eye movements in the size-speed illusion of approaching trains Journal Article In: Accident Analysis and Prevention, vol. 86, pp. 146–154, 2016. @article{Clark2016, Recent research on the perceived speed of large moving objects, compared to smaller moving objects, has revealed the presence of a size-speed illusion. This illusion, where a large object seems to be moving more slowly than a small object travelling at the same speed, may account for collisions between motor cars and trains at level crossings, which is a serious safety issue in New Zealand and worldwide. One possible reason for the perceived size-speed difference may be related to the movement of our eyes when we track moving vehicles. In order to investigate this, we tested observers' relative speed perception of moving objects (both abstract and more detailed objects) moving in depth towards the observer, presented on a computer display, while eye movements were recorded with an eye tracker. Experiment 1 first confirmed the size-speed illusion when the observers were situated further away (18, 36 m) from the simulated rail crossing or intersection. It also revealed that the eye movement behaviour of our participants was different when they judged the speeds of the small and large objects; eye fixations were localised around the visual centroid of longer objects and hence were further from the front of the moving large objects than the smaller ones. Experiment 2 found that manipulating eye movements could reduce the magnitude of the illusion. When observers tracked targets (dots) that were placed at corresponding locations at the front of the small object and the long object respectively, they perceived the speeds of the two objects as equal. When target dots were placed closer to the visual centroid, observers perceived the larger object to be moving slower. These results demonstrate that there is a close relationship between eye movement behaviour and our perceived judgement of an approaching train's speed. |
Alasdair D. F. Clarke; Patrick Green; Mike J. Chantler; Amelia R. Hunt Human search for a target on a textured background is consistent with a stochastic model Journal Article In: Journal of Vision, vol. 16, no. 7, pp. 1–16, 2016. @article{Clarke2016a, Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks. |
Alasdair D. F. Clarke; Amelia R. Hunt Failure of intuition when choosing whether to invest in a single goal or split resources between two goals Journal Article In: Psychological Science, vol. 27, no. 1, pp. 64–74, 2016. @article{Clarke2016, In a series of related experiments, we asked people to choose whether to split their attention between two equally likely potential tasks or to prioritize one task at the expense of the other. In such a choice, when the tasks are easy, the best strategy is to prepare for both of them. As difficulty increases beyond the point at which people can perform both tasks accurately, they should switch strategy and focus on one task at the expense of the other. Across three very different tasks (target detection, throwing, and memory), none of the participants switched their strategy at the correct point. Moreover, the majority consistently failed to modify their strategy in response to changes in task difficulty. This failure may have been related to uncertainty about their own ability, because in a version of the experiment in which there was no uncertainty, participants uniformly switched at an optimal point. |
Claudia Classen; Armin Kibele Action induction by visual perception of rotational motion Journal Article In: Psychological Research, vol. 80, no. 5, pp. 785–804, 2016. @article{Classen2016, A basic process in the planning of everyday actions involves the integration of visually perceived movement characteristics. Such processes of information integration often occur automatically. The aim of the present study was to examine whether the visual perception of spatial characteristics of a rotational motion (rotation direction) can induce a spatially compatible action. Four reaction time experiments were conducted to analyze the effect of perceiving task irrelevant rotational motions of simple geometric figures as well as of gymnasts on a horizontal bar while responding to color changes in these objects. The results show that the participants react faster when the directional information of a rotational motion is compatible with the spatial characteristics of an intended action. The degree of complexity of the perceived event does not play a role in this effect. The spatial features of the used biological motion were salient enough to elicit a motion based Simon effect. However, in the cognitive processing of the visual stimulus, the critical criterion is not the direction of rotation, but rather the relative direction of motion (direction of motion above or below the center of rotation). Nevertheless, this conclusion is tainted with reservations since it is only fully supported by the response behavior of female participants. |
Charles Clifton; Lyn Frazier Accommodation to an unlikely episodic state Journal Article In: Journal of Memory and Language, vol. 86, pp. 20–34, 2016. @article{Clifton2016, Mini-discourses like (ia) seem slightly odd compared to their counterparts containing a conjunction (ib). (i) a. Speaker A: John or Bill left. Speaker B: Sam did too. b. Speaker A: John and Bill left. Speaker B: Sam did too. One possibility is that or in Speaker A's utterance in (ia) raises the potential Question Under Discussion (QUD) whether it was John or Bill who left and Speaker B's reply fails to address this QUD. A different possibility is that the epistemic state of the speaker of (ia) is somewhat unlikely or uneven: the speaker knows that someone left, and that it was John or Bill, but doesn't know which one. The results of four acceptability judgment studies confirmed that (ia) is less good or coherent than (ib) (Experiment 1), but not due to failure to address the QUD implicitly introduced by the disjunction because the penalty for disjunction persisted even in the presence of a different overt QUD (Experiment 2) and even when there was no reply to Speaker A (Experiment 3). The hypothesis that accommodating an unusual epistemic state might underlie the lower acceptability of disjunction was supported by the fact that the disjunction penalty is larger in past tense discourses than in future discourses, where partial knowledge of events is the norm (Experiment 4). The results of an eye tracking study revealed a penalty for disjunction relative to conjunction that was significantly smaller when a lead-in (I wonder if it was . . .) explicitly introduced the disjunction. This interaction (connective × lead-in) appeared in early measures on the disjunctive phrase itself, suggesting that the input is related to an inferred epistemic state of the speaker in a rapid and ongoing fashion. |
Virginia Clinton; Kinga Morsanyi; Martha W. Alibali; Mitchell J. Nathan Learning about probability from text and tables: Do color coding and labeling through an interactive-user interface help? Journal Article In: Applied Cognitive Psychology, vol. 30, no. 3, pp. 440–453, 2016. @article{Clinton2016, Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly assigned to learn about probabilistic reasoning from one of 4 computer-based lessons generated from a 2 (color coding/no color coding) by 2 (labeling/no labeling) between-subjects design. Learners added the labels or color coding at their own pace by clicking buttons in a computer-based lesson. Participants' eye movements were recorded while viewing the lesson. Labeling was beneficial for learning, but color coding was not. In addition, labeling, but not color coding, increased attention to important information in the table and time with the lesson. Both labeling and color coding increased looks between the text and corresponding information in the table. The findings provide support for the multimedia principle, and they suggest that providing labeling enhances learning about probabilistic reasoning from text and tables. |
Moreno I. Coco; Frank Keller; George L. Malcolm Anticipation in real-world scenes: The role of visual context and visual memory Journal Article In: Cognitive Science, vol. 40, no. 8, pp. 1995–2024, 2016. @article{Coco2016, The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. |
Thérèse Collins The spatiotopic representation of visual objects across time Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 6, pp. 1531–1537, 2016. @article{Collins2016, Each eye movement introduces changes in the retinal location of objects. How a stable spatiotopic representation emerges from such variable input is an important question for the study of vision. Researchers have classically probed human observers' performance in a task requiring a location judgment about an object presented at different locations across a saccade. Correct performance on this task requires realigning or remapping retinal locations to compensate for the saccade. A recent study showed that performance improved with longer presaccadic viewing time, suggesting that accurate spatiotopic representations take time to build up. The first goal of the study was to replicate that finding. Two experiments, one an exact replication and the second a modified version, failed to replicate improved performance with longer presaccadic viewing time. The second goal of this study was to examine the role of attention in constructing spatiotopic representations, as theoretical and neurophysiological accounts of remapping have proposed that only attended targets are remapped. A third experiment thus manipulated attention with a spatial cueing paradigm and compared transsaccadic location performance of attended versus unattended targets. No difference in spatiotopic performance was found between attended and unattended targets. Although only negative results are reported, they might nevertheless suggest that spatiotopic representations are relatively stable over time. |
Kathy Conklin; Ana Pellicer-Sánchez Using eye-tracking in applied linguistics and second language research Journal Article In: Second Language Research, vol. 32, no. 3, pp. 453–467, 2016. @article{Conklin2016, With eye-tracking technology the eye is thought to give researchers a window into the mind. Importantly, eye-tracking has significant advantages over traditional online processing measures: chiefly that it allows for more ‘natural' processing as it does not require a secondary task, and that it provides a very rich moment-to-moment data source. In recognition of the technology's benefits, an ever increasing number of researchers in applied linguistics and second language research are beginning to use it. As eye-tracking gains traction in the field, it is important to ensure that it is established in an empirically sound fashion. To do this it is important for the field to come to an understanding about what eye-tracking is, what eye-tracking measures tell us, what it can be used for, and what different eye-tracking systems can and cannot do. Further, it is important to establish guidelines for designing sound research studies using the technology. The goal of the current review is to begin to address these issues. |
Amanda J. Connolly; Nicole J. Rinehart; Joanne Fielding Saccade adaptation in young people diagnosed with Attention Deficit Hyperactivity Disorder Combined Type Journal Article In: Neuroscience, vol. 333, pp. 27–34, 2016. @article{Connolly2016, Growing evidence suggests Attention Deficit Hyperactivity Disorder (ADHD) often co-occurs with Autism Spectrum Disorder (ASD), and a better understanding of the nature of their overlap, including at a neurobiological level, is needed. Research has implicated cerebellar networks as part of the neural circuitry disrupted in ASD, but little research has been carried out to investigate this in ADHD. We investigated cerebellar integrity using a double-step saccade adaptation paradigm in a group of male children aged 8–15 (n = 12) diagnosed with ADHD-Combined Type (ADHD-CT). Their performance was compared to a group of age and IQ-matched typically developing (TD) controls (n = 12). Parent-reported symptoms of ADHD-CT and ASD were measured, along with motor proficiency (Movement ABC-2). We found, on average, the adaptation of saccade gain was reduced for the ADHD-CT group compared to the TD group. Greater saccadic gain change (adaptation) was also positively correlated with higher Movement ABC-2 total and balance scores among the ADHD-CT participants. These differences suggest cerebellar networks underlying saccade adaptation may be disrupted in young people with ADHD-CT. Though our findings require further replication with larger samples, they suggest further research into cerebellar dysfunction in ADHD-CT, and as a point of neurobiological overlap with ASD, may be warranted. |
Amanda J. Connolly; Nicole J. Rinehart; Beth P. Johnson; Nicole Papadopoulos; Joanne Fielding In: Neuroscience, vol. 334, pp. 47–54, 2016. @article{Connolly2016a, Although there is little overlap in core diagnostic criteria for ADHD and Autism Spectrum Disorder (ASD), ASD symptoms are estimated to co-occur in children with ADHD in 20–50% of cases. As motor control deficits are common to both disorders, we investigated the impact of ASD symptoms on ocular motor control in children with Attention Deficit Hyperactivity Disorder-Combined Type (ADHD-CT), using a cued saccade paradigm sensitive to cerebellar ocular motor impairment in ASD. Basic saccade metrics (latency, velocity and accuracy), trial-to-trial variability, and main sequence relationships (saccade velocity for a given amplitude) were assessed for 14 males with ADHD-CT and 14 typically developing (TD) males (aged 8–14, IQ > 80). Our results revealed that saccade profiles of the ADHD-CT group showed a pattern of hypermetria and altered main sequence. As the cerebellum is crucially involved in the regulation of saccade parameters, we propose that this pattern of deficit in ADHD-CT is consistent with the widely reported morphological abnormalities in ocular motor vermis (cerebellar lobules VI-VII) in ADHD-CT and ASD. |
Hui Kou; Yanhua Su; Taiyong Bi; Xiao Gao; Hong Chen Attentional biases toward face-related stimuli among face dissatisfied women: Orienting and maintenance of attention revealed by eye-movement Journal Article In: Frontiers in Psychology, vol. 7, pp. 919, 2016. @article{Kou2016, The present study aimed to examine attentional biases toward attractive and unattractive faces among face dissatisfied women. Twenty-seven women with high face dissatisfaction (HFD) and 27 women with low face dissatisfaction (LFD) completed a visual dot-probe task while their eye movements were tracked. In the condition pairing faces with neutral stimuli (vases), compared to LFD women, HFD women directed their first fixations more often toward faces, directed their first fixations toward unattractive faces more quickly, and had longer first fixation durations on such faces. All participants had longer overall gaze duration on attractive faces than on unattractive ones. Our behavioral data revealed that HFD women had difficulty in disengaging their attention from faces. However, there were no group differences for stimulus pairs containing an attractive and an unattractive face. In sum, when faces were paired with neutral stimuli (vases), HFD women showed an attention pattern characterized by orienting and, at least initially, maintenance toward unattractive faces, but overall attention maintenance on attractive ones; no attentional bias was found in attractive–unattractive face pairs. |
Ivan Koychev; Dan Joyce; E. Barkus; Ulrich Ettinger; Anne Schmechtig; Colin T. Dourish; G. R. Dawson; Kevin J. Craig; J. F. William Deakin Cognitive and oculomotor performance in subjects with low and high schizotypy: Implications for translational drug development studies Journal Article In: Translational Psychiatry, vol. 6, pp. e811, 2016. @article{Koychev2016, The development of drugs to improve cognition in patients with schizophrenia is a major unmet clinical need. A number of promising compounds failed in recent clinical trials, a pattern linked to poor translation between preclinical and clinical stages of drug development. Seeking proof of efficacy in early Phase 1 studies in surrogate patient populations (for example, high schizotypy individuals where subtle cognitive impairment is present) has been suggested as a strategy to reduce attrition in the later stages of drug development. However, there is little agreement regarding the pattern of distribution of schizotypal features in the general population, creating uncertainty regarding the optimal control group that should be included in prospective trials. We aimed to address this question by comparing the performance of groups derived from the general population with low, average and high schizotypy scores over a range of cognitive and oculomotor tasks. We found that tasks dependent on frontal inhibitory mechanisms (N-Back working memory and anti-saccade oculomotor tasks), as well as a smooth-pursuit oculomotor task were sensitive to differences in the schizotypy phenotype. In these tasks the cognitive performance of 'low schizotypes' was significantly different from 'high schizotypes' with 'average schizotypes' having an intermediate performance. These results indicate that for evaluating putative cognition enhancers for treating schizophrenia in early drug development studies the maximum schizotypy effect would be achieved using a design that compares low and high schizotypes. |
Sarah C. Krall; Lukas J. Volz; Eileen Oberwelland; Christian Grefkes; Gereon R. Fink; Kerstin Konrad The right temporoparietal junction in attention and social interaction: A transcranial magnetic stimulation study Journal Article In: Human Brain Mapping, vol. 37, no. 2, pp. 796–807, 2016. @article{Krall2016, The right temporoparietal junction (rTPJ) has been associated with the ability to reorient attention to unexpected stimuli and the capacity to understand others' mental states (theory of mind [ToM]/false belief). Using activation likelihood estimation meta-analysis we previously unraveled that the anterior rTPJ is involved in both reorienting of attention and ToM, possibly indicating a more general role in attention shifting. Here, we used neuronavigated transcranial magnetic stimulation to directly probe the role of the rTPJ across attentional reorienting and false belief. Task performance in a visual cueing paradigm and false belief cartoon task was investigated after application of continuous theta burst stimulation (cTBS) over anterior rTPJ (versus vertex, for control). We found that attentional reorienting was significantly impaired after rTPJ cTBS compared with control. For the false belief task, error rates in trials demanding a shift in mental state significantly increased. Of note, a significant positive correlation indicated a close relation between the stimulation effect on attentional reorienting and false belief trials. Our findings extend previous neuroimaging evidence by indicating an essential overarching role of the anterior rTPJ for both cognitive functions, reorienting of attention and ToM. |
Rebecca M. Krock; Tirin Moore Visual sensitivity of frontal eye field neurons during the preparation of saccadic eye movements Journal Article In: Journal of Neurophysiology, vol. 116, no. 6, pp. 2882–2891, 2016. @article{Krock2016, Primate vision is continuously disrupted by saccadic eye movements, and yet this disruption goes unperceived. One mechanism thought to reduce perception of this self-generated movement is saccadic suppression, a global loss of visual sensitivity just before, during, and after saccadic eye movements. The frontal eye field (FEF) is a candidate source of neural correlates of saccadic suppression previously observed in visual cortex, because it contributes to the generation of visually guided saccades and modulates visual cortical responses. However, whether the FEF exhibits a perisaccadic reduction in visual sensitivity that could be transmitted to visual cortex is unknown. To determine whether the FEF exhibits a signature of saccadic suppression, we recorded the visual responses of FEF neurons to brief, full-field visual probe stimuli presented during fixation and before onset of saccades directed away from the receptive field in rhesus macaques (Macaca mulatta). We measured visual sensitivity during both epochs and found that it declines before saccade onset. Visual sensitivity was significantly reduced in visual but not visuomotor neurons. This reduced sensitivity was also present in visual neurons with no movement-related modulation during visually guided saccades and thus occurred independently from movement-related activity. Across the population of visual neurons, sensitivity began declining approximately 80 ms before saccade onset. We also observed a similar presaccadic reduction in sensitivity to isoluminant, chromatic stimuli. Our results demonstrate that the signaling of visual information by FEF neurons is reduced during saccade preparation, and thus these neurons exhibit a signature of saccadic suppression. |
Hannah M. Krüger; Thérèse Collins; Bernhard Englitz; Patrick Cavanagh Saccades create similar mislocalizations in visual and auditory space Journal Article In: Journal of Neurophysiology, vol. 115, no. 4, pp. 2237–2245, 2016. @article{Krueger2016, Orienting our eyes to a light, a sound, or a touch occurs effortlessly, despite the fact that sound and touch have to be converted from head- and body-based coordinates to eye-based coordinates to do so. We asked whether the oculomotor representation is also used for localization of sounds even when there is no saccade to the sound source. To address this, we examined whether saccades introduced similar errors of localization judgments for both visual and auditory stimuli. Sixteen subjects indicated the direction of a visual or auditory apparent motion seen or heard between two targets presented either during fixation or straddling a saccade. Compared with the fixation baseline, saccades introduced errors in direction judgments for both visual and auditory stimuli: in both cases, apparent motion judgments were biased in direction of the saccade. These saccade-induced effects across modalities give rise to the possibility of shared, cross-modal location coding for perception and action. |
Wouter Kruijne; Martijn Meeter Implicit short- and long-term memory direct our gaze in visual search Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 3, pp. 761–773, 2016. @article{Kruijne2016, Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the bias is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was present, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing. |
Gustav Kuhn; Robert Teszka; Natalia Tenaw; Alan Kingstone In: Cognition, vol. 146, pp. 136–142, 2016. @article{Kuhn2016a, People's attention is oriented towards faces, but the extent to which these social attention effects are under top down control is more ambiguous. Our first aim was to measure and compare, in real life and in the lab, people's top-down control over overt and covert shifts in reflexive social attention to the face of another. We employed a magic trick in which the magician used social cues (i.e. asking a question whilst establishing eye contact) to misdirect attention towards his face, thus preventing participants from noticing a visible colour change to a playing card. Our results show that overall people spend more time looking at the magician's face when he is seen on video than in reality. Additionally, although most participants looked at the magician's face when misdirected, this tendency to look at the face was modulated by instruction (i.e., "keep your attention on the cards"), and therefore, by top down control. Moreover, while the card's colour change was fully visible, the majority of participants failed to notice the change, and critically, change detection (our measure of covert attention) was not affected by where people looked (overt attention). We conclude that there is a tendency to shift overt and covert attention reflexively to faces, but that people exert more top down control over this overt shift in attention. These findings are discussed within a new framework that focuses on the role of eye movements as an attentional process as well as a form of non-verbal communication. |
Anuenue Kukona; David Braze; Clinton L. Johns; W. Einar Mencl; Julie A. Van Dyke; James S. Magnuson; Kenneth R. Pugh; Donald P. Shankweiler; Whitney Tabor The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill Journal Article In: Acta Psychologica, vol. 171, pp. 72–84, 2016. @article{Kukona2016, Recent studies have found considerable individual variation in language comprehenders' predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders' prediction ability (e.g., as reflected in predictive eye movements to a WHITE CAKE on hearing “The boy will eat the white…”). Simultaneously, comprehension-based measures predicted participants' ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible WHITE CAR). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results. |
Mark A. LeBoeuf; Jessica M. Choplin; Debra Pogrund Stark Eye see what you are saying: Testing conversational influences on the information gleaned from home-loan disclosure forms Journal Article In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016. @article{LeBoeuf2016, The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 recreated in the laboratory the effects that previous literature suggests are likely happening in the field: namely, that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers get and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision-making are considered. |
Noah M. Ledbetter; Charles D. Chen; Ilya E. Monosov Multiple Mechanisms for Processing Reward Uncertainty in the Primate Basal Forebrain Journal Article In: Journal of Neuroscience, vol. 36, no. 30, pp. 7852–7864, 2016. @article{Ledbetter2016, The ability to use information about the uncertainty of future outcomes is critical for adaptive behavior in an uncertain world. We show that the basal forebrain (BF) contains at least two distinct neural coding strategies to support this capacity. The dorsal-lateral BF, including the ventral pallidum (VP), contains reward-sensitive neurons, some of which are selectively suppressed by uncertain-reward predictions (U-). In contrast, the medial BF (mBF) contains reward-sensitive neurons, some of which are selectively enhanced (U+) by uncertain-reward predictions. In a two-alternative choice task, U- neurons were selectively suppressed while monkeys chose uncertain options over certain options. During the same choice epoch, U+ neurons signaled the subjective reward value of the choice options. Additionally, after the choice was reported, U+ neurons signaled reward uncertainty until the choice outcome. We suggest that uncertainty-related suppression of VP may participate in the mediation of uncertainty-seeking actions, whereas uncertainty-related enhancement of the mBF may direct cognitive resources to monitor and learn from uncertain outcomes. |
Helmut Leder; Aleksandra Mitrovic; Jürgen Goller How beauty determines gaze! Facial attractiveness and gaze duration in images of real world scenes Journal Article In: i-Perception, pp. 1–12, 2016. @article{Leder2016, We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces. |
Yen-Ju Lee; Harold H. Greene; Chia W. Tsai; Yu J. Chou Differences in sequential eye movement behavior between Taiwanese and American viewers Journal Article In: Frontiers in Psychology, vol. 7, pp. 697, 2016. @article{Lee2016a, Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that both cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context. |
Agathe Legrand; Karine Doré-Mazars; Christelle Lemoine; Vincent Nougier; Isabelle Olivier Interference between oculomotor and postural tasks in 7–8-year-old children and adults Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1667–1677, 2016. @article{Legrand2016, Several studies in adults that observed the effect of eye movements on postural control have provided contradictory results. In the present study, we explored the effect of various oculomotor tasks on postural control and the effect of different postural tasks on eye movements in eleven children (7.8 ± 0.5 years) and nine adults (30.4 ± 6.3 years). To vary the difficulty of the oculomotor task, three conditions were tested: fixation, prosaccades (reactive saccades made toward the target) and antisaccades (voluntary saccades made in the direction opposite to the visual target). To vary the difficulty of postural control, two postural tasks were tested: Standard Romberg (SR) and Tandem Romberg (TR). Postural difficulty did not affect oculomotor behavior, except by lengthening adults' latencies in the prosaccade task. For both groups, postural control was altered in the antisaccade task as compared to fixation and prosaccade tasks. Moreover, a ceiling effect was found in the more complex postural task. This study highlighted a cortical interference between oculomotor and postural control systems. |
Tsu-Chiang Lei; Shih-Chieh Wu; Chi-Wen Chao; Su-Hsin Lee Evaluating differences in spatial visual attention in wayfinding strategy when using 2D and 3D electronic maps Journal Article In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016. @article{Lei2016, With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats, and increasingly using a 3D format to represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to demonstrate whether different types of spatial maps indeed produce different visual attention and decision making. We use eye tracking technology to record the content of visual attention for 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps have the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use a t test statistical model to analyze differences in indices of eye movement, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of aggregation. The results show that aside from seek time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. This study uses a spatial autocorrelation model to analyze the aggregation of the spatial distribution of fixation points. The results show that in the 2D electronic map the spatial clustering of fixation points occurs in a range of around 12° from the center, and is accompanied by a shorter viewing time and larger saccade amplitude. In the 3D electronic map, the spatial clustering of fixation points occurs in a range of around 9° from the center, and is accompanied by a longer viewing time and smaller saccadic amplitude. 
The two statistical tests shown above demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment. This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace. |
Christelle Lemoine-Lardennois; Nadia Alahyane; Coline Tailhefer; Thérèse Collins; Jacqueline Fagard; Karine Doré-Mazars Saccadic adaptation in 10–41 month-old children Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 241, 2016. @article{LemoineLardennois2016, When saccade amplitude becomes systematically inaccurate, adaptation mechanisms gradually decrease or increase it until accurate saccade targeting is recovered. Adaptive shortening and adaptive lengthening of saccade amplitude rely on separate mechanisms in adults. When these adaptation mechanisms emerge during development is poorly known except that adaptive shortening processes are functional in children above 8 years of age. Yet, saccades in infants are consistently inaccurate (hypometric) as if adaptation mechanisms were not fully functional in early childhood. Here, we tested reactive saccade adaptation in 10–41 month-old children compared to a group of 20–30 year-old adults. A visual target representing a cartoon character appeared at successive and unpredictable locations 10° apart on a computer screen. During the eye movement toward the target, it systematically stepped in the direction opposite to the saccade to induce an adaptive shortening of saccade amplitude (Experiment 1). In Experiment 2, the target stepped in the same direction as the ongoing saccade to induce an adaptive lengthening of saccade amplitude. In both backward and forward adaptation experiments, saccade adaptation was compared to a control condition where there was no intrasaccadic target step. Analysis of baseline performance revealed both longer saccade reaction times and hypometric saccades in children compared to adults. In both experiments, children on average showed gradual changes in saccade amplitude consistent with the systematic intrasaccadic target steps. Moreover, the amount of amplitude change was similar between children and adults for both backward and forward adaptation. 
Finally, adaptation abilities in our child group were not related to age. Overall the results suggest that the neural mechanisms underlying reactive saccade adaptation are in place early during development. |
Karolina M. Lempert; Eli Johnson; Elizabeth A. Phelps Emotional arousal predicts intertemporal choice Journal Article In: Emotion, vol. 16, no. 5, pp. 647–656, 2016. @article{Lempert2016, People generally prefer immediate rewards to rewards received after a delay, often even when the delayed reward is larger. This phenomenon is known as temporal discounting. It has been suggested that preferences for immediate rewards may be due to their being more concrete than delayed rewards. This concreteness may evoke an enhanced emotional response. Indeed, manipulating the representation of a future reward to make it more concrete has been shown to heighten the reward's subjective emotional intensity, making people more likely to choose it. Here the authors use an objective measure of arousal—pupil dilation—to investigate if emotional arousal mediates the influence of delayed reward concreteness on choice. They recorded pupil dilation responses while participants made choices between immediate and delayed rewards. They manipulated concreteness through time interval framing: delayed rewards were presented either with the date on which they would be received (e.g., “$30, May 3”; DATE condition, more concrete) or in terms of delay to receipt (e.g., “$30, 7 days”; DAYS condition, less concrete). Contrary to prior work, participants were not overall more patient in the DATE condition. However, there was individual variability in response to time framing, and this variability was predicted by differences in pupil dilation between conditions. Emotional arousal increased as the subjective value of delayed rewards increased, and predicted choice of the delayed reward on each trial. This study advances our understanding of the role of emotion in temporal discounting. |
Mark D. Lescroart; Nancy Kanwisher; Julie D. Golomb No evidence for automatic remapping of stimulus features or location found with fMRI Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 53, 2016. @article{Lescroart2016, The input to our visual system shifts every time we move our eyes. To maintain a stable percept of the world, visual representations must be updated with each saccade. Near the time of a saccade, neurons in several visual areas become sensitive to the regions of visual space that their receptive fields occupy after the saccade. This process, known as remapping, transfers information from one set of neurons to another, and may provide a mechanism for visual stability. However, it is not clear whether remapping transfers information about stimulus features in addition to information about stimulus location. To investigate this issue, we recorded BOLD fMRI responses while human subjects viewed images of faces and houses (two visual categories with many feature differences). Immediately after some image presentations, subjects made a saccade that moved the previously stimulated location to the opposite side of the visual field. We then used a combination of univariate analyses and multivariate pattern analyses to test whether information about stimulus location and stimulus features were remapped to the ipsilateral hemisphere after the saccades. We found no reliable indication of stimulus feature remapping in any region. However, we also found no reliable indication of stimulus location remapping, despite the fact that our paradigm was highly similar to previous fMRI studies of remapping. The absence of location remapping in our study precludes strong conclusions regarding feature remapping. However, these results also suggest that measurement of location remapping with fMRI depends strongly on the details of the experimental paradigm used. 
We highlight differences in our approach from the original fMRI studies of remapping, discuss potential reasons for the failure to generalize prior location remapping results, and suggest directions for future research. |
Joan López-Moliner; Eli Brenner Flexible timing of eye movements when catching a ball Journal Article In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016. @article{LopezMoliner2016, In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task. |
P. J. López-Peréz; J. Dampuré; J. A. Hernández-Cabrera; H. A. Barber Semantic parafoveal-on-foveal effects and preview benefits in reading: Evidence from fixation related potentials Journal Article In: Brain and Language, vol. 162, pp. 29–34, 2016. @article{LopezPerez2016, During reading parafoveal information can affect the processing of the word currently fixated (parafovea-on-fovea effect) and words perceived parafoveally can facilitate their subsequent processing when they are fixated on (preview effect). We investigated parafoveal processing by simultaneously recording eye movements and EEG measures. Participants read word pairs that could be semantically associated or not. Additionally, the boundary paradigm allowed us to carry out the same manipulation on parafoveal previews that were displayed until the reader's gaze moved to the target words. Event Related Potentials time-locked to the prime-preview presentation showed a parafoveal-on-foveal N400 effect. Fixation Related Potentials time-locked to the saccade offset showed an N400 effect related to the prime-target relationship. Furthermore, this later effect interacted with the semantic manipulation of the previews, supporting a semantic preview benefit. These results demonstrate that at least under optimal conditions foveal and parafoveal information can be simultaneously processed and integrated. |
Matthew W. Lowder; Fernanda Ferreira Prediction in the processing of repair disfluencies: Evidence from the Visual-World Paradigm Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 9, pp. 1400–1416, 2016. @article{Lowder2016, Two visual-world eye-tracking experiments investigated the role of prediction in the processing of repair disfluencies (e.g., "The chef reached for some salt uh I mean some ketchup . . ."). Experiment 1 showed that listeners were more likely to fixate a critical distractor item (e.g., pepper) during the processing of repair disfluencies compared with the processing of coordination structures (e.g., ". . . some salt and also some ketchup . . ."). Experiment 2 replicated the findings of Experiment 1 for disfluency versus coordination constructions and also showed that the pattern of fixations to the critical distractor for disfluency constructions was similar to the fixation patterns for sentences employing contrastive focus (e.g., ". . . not some salt but rather some ketchup . . ."). The results suggest that similar mechanisms underlie the processing of repair disfluencies and contrastive focus, with listeners generating sets of entities that stand in semantic contrast to the reparandum in the case of disfluencies or the negated entity in the case of contrastive focus. |
Matthew W. Lowder; Peter C. Gordon Eye-tracking and corpus-based analyses of syntax-semantics interactions in complement coercion Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 7, pp. 921–939, 2016. @article{Lowder2016a, Previous work has shown that the difficulty associated with processing complex semantic expressions is reduced when the critical constituents appear in separate clauses as opposed to when they appear together in the same clause. We investigated this effect further, focusing in particular on complement coercion, in which an event-selecting verb (e.g. began) combines with a complement that represents an entity (e.g. began the memo). Experiment 1 compared reading times for coercion versus control expressions when the critical verb and complement appeared together in a subject-extracted relative clause (SRC) (e.g. The secretary that began/wrote the memo) compared to when they appeared together in a simple sentence. Readers spent more time processing coercion expressions than control expressions, replicating the typical coercion cost. In addition, readers spent less time processing the verb and complement in SRCs than in simple sentences; however, the magnitude of the coercion cost did not depend on sentence structure. In contrast, Experiment 2 showed that the coercion cost was reduced when the complement appeared as the head of an object-extracted relative clause (ORC) (e.g. The memo that the secretary began/wrote) compared to when the constituents appeared together in an SRC. Consistent with the eye-tracking results of Experiment 2, a corpus analysis showed that expressions requiring complement coercion are more frequent when the constituents are separated by the clause boundary of an ORC compared to when they are embedded together within an SRC. 
The results provide important information about the types of structural configurations that contribute to reduced difficulty with complex semantic expressions, as well as how these processing patterns are reflected in naturally occurring language. |
Jingyi Lu; Huiyuan Jia; Xiaofei Xie; Qiuhong Wang Missing the best opportunity; who can seize the next one? Agents show less inaction inertia than personal decision makers Journal Article In: Journal of Economic Psychology, vol. 54, pp. 100–112, 2016. @article{Lu2016, Inaction inertia is a prevalent consumer decision bias, whereby missing a superior opportunity decreases the likelihood of acting on a subsequent opportunity in the same domain. We assume that a cognitive focus accounts for the inaction inertia effect. Individuals focus more on losses (the association between the current opportunity and missed opportunity) than gains (the association between the current opportunity and original states), therefore showing the inaction inertia effect. We also propose a self-other difference in inaction inertia: agents exhibit less inaction inertia than personal decision makers as they focus more on gains than losses compared to personal decision makers. In Study 1, agents were less trapped in inaction inertia than personal decision makers. Cognitive focus was measured with eye-tracking techniques in Study 2 and a self-reported item in Study 3. Agents were observed as focusing less on losses than gains compared to personal decision makers. This cognitive focus difference explained the self-other difference in inaction inertia. In Study 4, both types of decision makers were less susceptible to inaction inertia when focusing on gains than losses. |
Heather D. Lucas; Jim M. Monti; Edward McAuley; Patrick D. Watson; Arthur F. Kramer; Neal J. Cohen Relational memory and self-efficacy measures reveal distinct profiles of subjective memory concerns in older adults Journal Article In: Neuropsychology, vol. 30, no. 5, pp. 568–578, 2016. @article{Lucas2016, Objective: Subjective memory concerns (SMCs) in healthy older adults are associated with future decline and can indicate preclinical dementia. However, SMCs may be multiply determined, and often correlate with affective or psychosocial variables rather than with performance on memory tests. Our objective was to identify sensitive and selective methods to disentangle the underlying causes of SMCs. Method: Because preclinical dementia pathology targets the hippocampus, we hypothesized that performance on hippocampally dependent relational memory tests would correlate with SMCs. We thus administered a series of memory tasks with varying dependence on relational memory processing to 91 older adults, along with questionnaires assessing depression, anxiety, and memory self-efficacy. We used correlational, regression, and mediation analyses to compare the variance in SMCs accounted for by these measures. Results: Performance on the task most dependent on relational memory processing showed a stronger negative association with SMCs than did other memory performance metrics. SMCs were also negatively associated with memory self-efficacy. These 2 measures, along with age and education, accounted for 40% of the variance in SMCs. Self-efficacy and relational memory were uncorrelated and independent predictors of SMCs. Moreover, self-efficacy statistically mediated the relationship between SMCs and depression and anxiety, which can be detrimental to cognitive aging. Conclusions: These data identify multiple mechanisms that can contribute to SMCs, and suggest that SMCs can both cause and be caused by age-related cognitive decline.
Relational memory measures may be effective assays of objective memory difficulties, while assessing self-efficacy could identify detrimental affective responses to cognitive aging. |
Jiří Lukavský; Filip Děchtěrenko Gaze position lagging behind scene content in multiple object tracking: Evidence from forward and backward presentations Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 8, pp. 2456–2468, 2016. @article{Lukavsky2016, In everyday life, people often need to track moving objects. Recently, a topic of discussion has been whether people rely solely on the locations of tracked objects, or take their directions into account in multiple object tracking (MOT). In the current paper, we pose a related question: do people utilise extrapolation in their gaze behaviour, or, in more practical terms, should the mathematical models of gaze behaviour in an MOT task be based on objects' current, past or anticipated positions? We used a data-driven approach with no a priori assumption about the underlying gaze model. We repeatedly presented the same MOT trials forward and backward and collected gaze data. After reversing the data from the backward trials, we gradually tested different time adjustments to find the local maximum of similarity. In a series of four experiments, we showed that the gaze position lagged by approximately 110 ms behind the scene content. We observed the lag in all subjects (Experiment 1). We further experimented to determine whether tracking workload or predictability of movements affect the size of the lag. Low workload led only to a small non-significant shortening of the lag (Experiment 2). Impairing the predictability of objects' trajectories increased the lag (Experiments 3a and 3b). We tested our observations with predictions of a centroid model: we observed a better fit for a model based on the locations of objects 110 ms earlier. We conclude that mathematical models of gaze behaviour in MOT should account for the lags. |
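The lag-estimation procedure described in the Lukavský and Děchtěrenko abstract — shifting the gaze trace against the scene content and searching for the adjustment that maximizes similarity — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `estimate_gaze_lag`, the distance-based similarity measure, and the parameter choices are assumptions.

```python
import numpy as np

def estimate_gaze_lag(gaze, scene, rate_hz=1000, max_lag_ms=300):
    """Find the time shift (in ms) that best aligns gaze with scene content.

    gaze, scene: (n_samples, 2) arrays of x/y positions sampled at rate_hz.
    Returns the lag minimizing the mean Euclidean distance between the
    gaze trace and the time-shifted scene trace.
    """
    best_lag, best_err = 0, np.inf
    step = max(1, rate_hz // 1000)  # samples per millisecond
    for lag_ms in range(0, max_lag_ms + 1):
        shift = lag_ms * step
        if shift >= len(gaze):
            break
        # compare gaze at time t with scene content at time t - lag
        err = np.mean(np.linalg.norm(gaze[shift:] - scene[:len(scene) - shift],
                                     axis=1))
        if err < best_err:
            best_lag, best_err = lag_ms, err
    return best_lag
```

Applied to a gaze trace that simply replays the scene trajectory with a fixed delay, the minimum-error shift recovers that delay, analogous to the roughly 110 ms lag the study reports.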
Steven G. Luke; Kiel Christianson Limits on lexical prediction during reading Journal Article In: Cognitive Psychology, vol. 88, pp. 22–60, 2016. @article{Luke2016, Efficient language processing may involve generating expectations about upcoming input. To investigate the extent to which prediction might facilitate reading, a large-scale survey provided cloze scores for all 2689 words in 55 different text passages. Highly predictable words were quite rare (5% of content words), and most words had a more-expected competitor. An eye-tracking study showed sensitivity to cloze probability but no mis-prediction cost. Instead, the presence of a more-expected competitor was found to be facilitative in several measures. Further, semantic and morphosyntactic information was highly predictable even when word identity was not, and this information facilitated reading above and beyond the predictability of the full word form. The results are consistent with graded prediction but inconsistent with full lexical prediction. Implications for theories of prediction in language comprehension are discussed. |
Steven G. Luke; John M. Henderson The influence of content meaningfulness on eye movements across tasks: Evidence from scene viewing and reading Journal Article In: Frontiers in Psychology, vol. 7, pp. 257, 2016. @article{Luke2016a, The present study investigated the influence of content meaningfulness on eye-movement control in reading and scene viewing. Texts and scenes were manipulated to make them uninterpretable, and then eye-movements in reading and scene-viewing were compared to those in pseudo-reading and pseudo-scene viewing. Fixation durations and saccade amplitudes were greater for pseudo-stimuli. The effect of the removal of meaning was seen exclusively in the tail of the fixation duration distribution in both tasks, and the size of this effect was the same across tasks. These findings suggest that eye movements are controlled by a common mechanism in reading and scene viewing. They also indicate that not all eye movements are responsive to the meaningfulness of stimulus content. Implications for models of eye movement control are discussed. |
Mikael Lundqvist; Jonas Rose; Pawel Herman; Scott L. Brincat; Timothy J. Buschman; Earl K. Miller Gamma and beta bursts underlie working memory Journal Article In: Neuron, vol. 90, no. 1, pp. 152–164, 2016. @article{Lundqvist2016, Working memory is thought to result from sustained neuron spiking. However, computational models suggest complex dynamics with discrete oscillatory bursts. We analyzed local field potential (LFP) and spiking from the prefrontal cortex (PFC) of monkeys performing a working memory task. There were brief bursts of narrow-band gamma oscillations (45–100 Hz), varied in time and frequency, accompanying encoding and re-activation of sensory information. They appeared at a minority of recording sites associated with spiking reflecting the to-be-remembered items. Beta oscillations (20–35 Hz) also occurred in brief, variable bursts but reflected a default state interrupted by encoding and decoding. Only activity of neurons reflecting encoding/decoding correlated with changes in gamma burst rate. Thus, gamma bursts could gate access to, and prevent sensory interference with, working memory. This supports the hypothesis that working memory is manifested by discrete oscillatory dynamics and spiking, not sustained activity. |
Judith Lunn; Tim Donovan; Damien Litchfield; Charlie Lewis; Robert Davies; Trevor J. Crawford Saccadic eye movement abnormalities in children with epilepsy Journal Article In: PLoS ONE, vol. 11, no. 8, pp. e0160508, 2016. @article{Lunn2016, Childhood onset epilepsy is associated with disrupted developmental integration of sensorimotor and cognitive functions that contribute to persistent neurobehavioural comorbidities. The role of epilepsy and its treatment on the development of functional integration of motor and cognitive domains is unclear. Oculomotor tasks can probe neurophysiological and neurocognitive mechanisms vulnerable to developmental disruptions by epilepsy-related factors. The study involved 26 patients and 48 typically developing children aged 8–18 years old who performed a prosaccade and an antisaccade task. Analyses compared medicated chronic epilepsy patients and unmedicated controlled epilepsy patients to healthy control children on saccade latency, accuracy and dynamics, errors and correction rate, and express saccades. Patients with medicated chronic epilepsy had impaired and more variable processing speed, reduced accuracy, increased peak velocity and a greater number of inhibitory errors; younger unmedicated patients also showed deficits in error monitoring. Deficits were related to reported behavioural problems in patients. Epilepsy factors were significant predictors of oculomotor functions. An earlier age at onset predicted reduced latency of prosaccades and increased express saccades, and the typical relationship between express saccades and inhibitory errors was absent in chronic patients, indicating a persistent reduction in tonic cortical inhibition and aberrant cortical connectivity. In contrast, onset in later childhood predicted altered antisaccade dynamics indicating disrupted neurotransmission in frontoparietal and oculomotor networks with greater demand on inhibitory control.
The observed saccadic abnormalities are consistent with a dysmaturation of subcortical-cortical functional connectivity and aberrant neurotransmission. Eye movements could be used to monitor the impact of epilepsy on neurocognitive development and help assess the risk for poor neurobehavioural outcomes. |
Yingyi Luo; Ming Yan; Shaorong Yan; Xiaolin Zhou; Albrecht W. Inhoff In: Cognitive, Affective and Behavioral Neuroscience, vol. 16, no. 1, pp. 72–92, 2016. @article{Luo2016, In two experiments, we examined the contribution of articulation-specific features to visual word recognition during the reading of Chinese. In spoken Standard Chinese, a syllable with a full tone can be tone-neutralized through sound weakening and pitch contour change, and there are two types of two-character compound words with respect to their articulation variation. One type requires articulation of a full tone for each constituent character, and the other requires a full- and a neutral-tone articulation for the first and second characters, respectively. Words of these two types with identical first characters were selected and embedded in sentences. Native speakers of Standard Chinese were recruited to read the sentences. In Experiment 1, the individual words of a sentence were presented serially at a fixed pace while event-related potentials were recorded. This resulted in less-negative N100 and anterior N250 amplitudes and in more-negative N400 amplitudes when targets contained a neutral tone. Complete sentences were visible in Experiment 2, and eye movements were recorded while participants read. Analyses of oculomotor activity revealed shorter viewing durations and fewer refixations on—and fewer regressive saccades to—target words when their second syllable was articulated with a neutral rather than a full tone. Together, the results indicate that readers represent articulation-specific word properties, that these representations are routinely activated early during the silent reading of Chinese sentences, and that the representations are also used during later stages of word processing. |
Minna Lyons; Urszula M. Marcinkowska; Victoria Moisey; Neil Harrison The effects of resource availability and relationship status on women's preference for facial masculinity in men: An eye-tracking study Journal Article In: Personality and Individual Differences, vol. 95, pp. 25–28, 2016. @article{Lyons2016, Previous research has demonstrated that perceived availability of environmental resources affects the mate choice of females. However, it is unclear whether women's partnership status influences the effects of environmental circumstances on masculinity preference. Further, the role of environmental scarcity on women's gaze patterns when evaluating male faces has not been investigated. The current study investigated how relationship status and environmental factors affected women's gaze patterns and preference towards masculinised and feminised male faces. Twenty-two participants in a long-term romantic relationship, and 26 who were single, were primed with either a high ('wealthy') or low ('scarcity') resource availability scenario. They then completed a facial masculinity/femininity preference task while eye-gaze behaviour was measured. Women in a relationship (but not single women) had an increased preference towards masculine faces in the scarcity condition, compared to the wealthy condition; this preference was also reflected in eye gaze behaviour. In contrast, single women had longer first fixations on feminine rather than masculine faces when evaluating them as long-term partners in the wealthy condition, but no overt preference for either face type. These findings reveal the importance of taking women's relationship status into account in investigations of the role of environmental influences on masculinity preferences. |
MiYoung Kwon; Rong Liu; Lillian Chien Compensation for blur requires increase in field of view and viewing time Journal Article In: PLoS ONE, vol. 11, no. 9, pp. e0162711, 2016. @article{Kwon2016b, Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of “views” (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered for developing low vision rehabilitation or assistive aids. |
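The low-pass filtering used by Kwon et al. to manipulate spatial resolution can be illustrated with a frequency-domain filter. The sketch below assumes a hard cutoff specified in cycles per degree and a known display resolution in pixels per degree; the function name `low_pass_filter` and the brick-wall filter shape are assumptions, not the study's exact implementation (which may have used a smoother filter profile):

```python
import numpy as np

def low_pass_filter(image, cutoff_cpd, ppd):
    """Blur an image by removing spatial frequencies above a cutoff.

    image: 2-D grayscale array; cutoff_cpd: cutoff in cycles/degree;
    ppd: display resolution in pixels/degree.
    """
    # frequency of each FFT bin, converted from cycles/pixel to cycles/degree
    fy = np.fft.fftfreq(image.shape[0]) * ppd
    fx = np.fft.fftfreq(image.shape[1]) * ppd
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    spectrum = np.fft.fft2(image)
    spectrum[radius > cutoff_cpd] = 0.0  # hard low-pass: zero high frequencies
    return np.real(np.fft.ifft2(spectrum))
```

Filtering a high-frequency grating with a cutoff below the grating's frequency removes the pattern entirely while preserving mean luminance (the DC component), which is the defining property of such a manipulation.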
Nayoung Kwon; Patrick Sturt Processing control information in a nominal control construction: An eye-tracking study Journal Article In: Journal of Psycholinguistic Research, vol. 45, no. 4, pp. 779–793, 2016. @article{Kwon2016, In an eye-tracking experiment, we examined the processing of the nominal control construction. Participants' eye-movements were monitored while they read sentences that included either giver control nominals (e.g. promise in Luke's promise to Sophia to photograph himself) or recipient control nominals (e.g. plea in Luke's plea to Sophia to photograph herself). In order to examine both the initial access of control information, and its later use in on-line processing, we combined a manipulation of nominal control with a gender match/mismatch paradigm. Results showed that there was evidence of processing difficulty for giver control sentences (relative to recipient control sentences) at the point where the control dependency was initially created, suggesting that control information was accessed during the early parsing stages. This effect is attributed to a recency preference in the formation of control dependencies; the parser prefers to assign a recent antecedent to PRO. In addition, readers slowed down after reading a reflexive pronoun that mismatched with the gender of the antecedent indicated by the control nominal (e.g. Luke's promise to Sophia to photograph herself). The mismatch cost suggests that control information of the nominal control construction was used to constrain dependency formation involving a controller, PRO and a reflexive, confirming the use of control information in on-line interpretation. |
Nayoung Kwon; Patrick Sturt Attraction effects in honorific agreement in Korean Journal Article In: Frontiers in Psychology, vol. 7, pp. 1302, 2016. @article{Kwon2016a, Previous studies have suggested that sentence processing is mediated by content-addressable direct retrieval processes (McElree, 2000; McElree et al., 2003). However, the memory retrieval processes may differ as a function of the type of dependency. For example, while many studies have reported facilitatory intrusion effects associated with a structurally illicit antecedent during the processing of subject-verb number or person agreement and negative polarity items (Pearlmutter et al., 1999; Xiang et al., 2009; Dillon et al., 2013), studies investigating reflexives have not found consistent evidence of intrusion effects (Parker et al., 2015; Sturt and Kwon, 2015; cf. Nicol and Swinney, 1989; Sturt, 2003). Similarly, the memory retrieval processes could also be sensitive to cross-linguistic differences (cf. Lago et al., 2015). We report one self-paced reading experiment and one eye-tracking experiment that examine the processing of subject-verb honorific agreement, a dependency that is different from those that have been studied to date, in Korean, a typologically different language from those previously studied. The overall results suggest that the retrieval processes underlying the processing of subject-verb honorific agreement in Korean are susceptible to facilitatory intrusion effects from a structurally illicit but feature-matching subject, with a pattern that is similar to subject-verb agreement in English. In addition, the attraction effect was not limited to the ungrammatical sentences but was also found in grammatical sentences. The clear attraction effect in grammatical sentences suggests that attraction does not arise solely from an error-driven process (cf. Wagers et al., 2009), but likely also results from general retrieval mechanisms that activate potential items in memory (Vasishth et al., 2008). |
Markos Kyritsis; Stephen R. Gulliver; Eva Feredoes Environmental factors and features that influence visual search in a 3D WIMP interface Journal Article In: International Journal of Human-Computer Studies, vol. 92-93, pp. 30–43, 2016. @article{Kyritsis2016, The challenge of moving past the classic Window Icons Menus Pointer (WIMP) interface, i.e. by turning it '3D', has resulted in much research and development. To evaluate the impact of 3D on the 'finding a target picture in a folder' task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids, semantic category distribution of targets versus non-targets; and the detailed measurement of lower-level stimuli features. Across two separate experiments, one large sample web-based experiment, to understand associations, and one controlled lab environment, using eye tracking to understand user focus, we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimuli features (i.e. contrast, colour and luminance) impact how successfully participants are able to search for, and detect, the target image. Moreover in the lab-based experiment, we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load as a result of either an increasing number of items on the screen, or due to the inclusion of visual depth. Our findings showed that increasing the visible layers of depth, and inclusion of converging lines, did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrasts between a target and its neighbours correlated with an increase in detection errors.
Finally, pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. Results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface. |
Kaitlin E. W. Laidlaw; Mona J. H. Zhu; Alan Kingstone Looking away: Distractor influences on saccadic trajectory and endpoint in prosaccade and antisaccade tasks Journal Article In: Experimental Brain Research, vol. 234, no. 6, pp. 1637–1648, 2016. @article{Laidlaw2016, Successful target selection often occurs concurrently with distractor inhibition. A better understanding of the former thus requires a thorough study of the competition that arises between target and distractor representations. In the present study, we explore whether the presence of a distractor influences saccade processing via interfering with visual target and/or saccade goal representations. To do this, we asked participants to make either pro- or antisaccade eye movements to a target and measured the change in their saccade trajectory and landing position (collectively referred to as deviation) in response to distractors placed near or far from the saccade goal. The use of an antisaccade paradigm may help to distinguish between stimulus- and goal-related distractor interference, as unlike with prosaccades, these two features are dissociated in space when making a goal-directed antisaccade response away from a visual target stimulus. The present results demonstrate that for both pro- and antisaccades, distractors near the saccade goal elicited the strongest competition, as indicated by greater saccade trajectory deviation and landing position error. Though distractors far from the saccade goal elicited, on average, greater deviation away in antisaccades than in prosaccades, a time-course analysis revealed a significant effect of far-from-goal distractors in prosaccades as well. Considered together, the present findings support the view that goal-related representations most strongly influence the saccade metrics tested, though stimulus-related representations may play a smaller role in determining distractor-based interference effects on saccade execution under certain circumstances. Further, the results highlight the advantage of considering temporal changes in distractor-based interference. |
Caroline Landelle; Anna Montagnini; Laurent Madelain; Frederic R. Danion Eye tracking a self-moved target with complex hand-target dynamics Journal Article In: Journal of Neurophysiology, vol. 116, no. 4, pp. 1859–1870, 2016. @article{Landelle2016, Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ~5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. |
Mitchell R. P. LaPointe; Bruce Milliken Semantically incongruent objects attract eye gaze when viewing scenes for change Journal Article In: Visual Cognition, vol. 24, no. 1, pp. 63–77, 2016. @article{LaPointe2016, Past research has shown that change detection performance is often more efficient for target objects that are semantically incongruent with a surrounding scene context than for target objects that are semantically congruent with the scene context. One account of these findings is that attention is attracted to objects for which the identity of the object conflicts with the meaning of the scene, perhaps as a violation of expectancies created by earlier recruitment of scene gist information. An alternative account of the performance benefit for incongruent objects is that attention is more apt to linger on incongruent objects, as identifying these objects may be more difficult due to conflicting information from the scene context. In the current experiment, we presented natural scenes in a change detection task while monitoring eye movements. We find that eye gaze is attracted to semantically incongruent objects relatively early during scene processing. |