All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024) are listed below. You can search the publication library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking paper, please email us!
2013 |
Corey D. Holland; Oleg V. Komogortsev Complex eye movement pattern biometrics: The effects of environment and stimulus Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, pp. 2115–2126, 2013. @article{Holland2013, This paper presents an objective evaluation of the effects of eye tracking specification and stimulus presentation on the biometric viability of complex eye movement patterns. Six spatial accuracy tiers (0.5°, 1.0°, 1.5°, 2.0°, 2.5°, 3.0°), six temporal resolution tiers (1000, 500, 250, 120, 75, 30 Hz), and five stimulus types (simple, complex, cognitive, textual, random) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment capable of at least 0.5° spatial accuracy and 250 Hz temporal resolution for biometric purposes, whereas stimulus had little effect on the biometric viability of eye movements. |
Andrew Hollingworth; Michi Matsukura; Steven J. Luck Visual working memory modulates low-level saccade target selection: Evidence from rapidly generated saccades in the global effect paradigm Journal Article In: Journal of Vision, vol. 13, no. 13, pp. 1–8, 2013. @article{Hollingworth2013, In three experiments, we examined the influence of visual working memory (VWM) on the metrics of saccade landing position in a global effect paradigm. Participants executed a saccade to the more eccentric object in an object pair appearing on the horizontal midline, to the left or right of central fixation. While completing the saccade task, participants maintained a color in VWM for an unrelated memory task. Either the color of the saccade target matched the memory color (target match), the color of the distractor matched the memory color (distractor match), or the colors of neither object matched the memory color (no match). In the no-match condition, saccades tended to land at the midpoint between the two objects: the global, or averaging, effect. However, when one of the two objects matched VWM, the distribution of landing position shifted toward the matching object, both for target match and for distractor match. VWM modulation of landing position was observed even for the fastest quartile of saccades, with a mean latency as low as 112 ms. Effects of VWM on such rapidly generated saccades, with latencies in the express-saccade range, indicate that VWM interacts with the initial sweep of visual sensory processing, modulating perceptual input to oculomotor systems and thereby biasing oculomotor selection. As a result, differences in memory match produce effects on landing position similar to the effects generated by differences in physical salience. |
Andrew Hollingworth; Michi Matsukura; Steven J. Luck Visual working memory modulates rapid eye movements to simple onset targets Journal Article In: Psychological Science, vol. 24, no. 5, pp. 790–796, 2013. @article{Hollingworth2013a, Representations in visual working memory (VWM) influence attention and gaze control in complex tasks, such as visual search, that require top-down selection to resolve stimulus competition. VWM and visual attention clearly interact, but the mechanism of that interaction is not well understood. In the research reported here, we demonstrated that in the absence of stimulus competition or goal-level biases, VWM representations of object features influence the spatiotemporal dynamics of extremely simple eye movements. The influence of VWM therefore extends into the most basic operations of the oculomotor system. |
Edward Holsinger Representing idioms: Syntactic and contextual effects on idiom processing Journal Article In: Language and Speech, vol. 56, no. 3, pp. 373–394, 2013. @article{Holsinger2013, Recent work on the processing of idiomatic expressions argues against the idea that idioms are simply big words. For example, hybrid models of idiom representation, originally investigated in the context of idiom production, propose a priority of literal computation, and a principled relationship between the conceptual meaning of an idiom, its literal lemmas and its syntactic structure. We examined the predictions of the hybrid representation hypothesis in the domain of idiom comprehension. We conducted two experiments to examine the role of syntactic, lexical and contextual factors on the interpretation of idiomatic expressions. Experiment 1 examines the role of syntactic compatibility and lexical compatibility on the real-time processing of potentially idiomatic strings. Experiment 2 examines the role of contextual information on idiom processing and how context interacts with lexical information during processing. We find evidence that literal computation plays a causal role in the retrieval of idiomatic meaning and that contextual, lexical and structural information influence the processing of idiomatic strings at early stages of processing, which provides support for the hybrid model of idiom representation in the domain of idiom comprehension. |
Lynn Huestegge; Iring Koch Constraints in task-set control: Modality dominance patterns among effector systems Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 3, pp. 633–637, 2013. @article{Huestegge2013, Flexibility in configuring task sets allows people to adequately respond to environmental stimuli in different contexts, such as in dual-task situations. In the present study, we examined to what extent response control is influenced by the modality of a concurrently executed response. In Experiment 1, participants responded to auditory stimuli with either vocal responses and/or saccades. In Experiment 2, vocal responses were combined with manual responses. In both experiments, we found asymmetric dual-response costs, that is, the response time difference between single- and dual-response conditions varied between response modalities. It is important to note that the same (vocal) response showed substantial dual-response costs when combined with saccades (Experiment 1) but no such costs when combined with manual responses (Experiment 2). Experiment 3, combining saccades with manual responses, revealed stronger dual-response costs for manual responses than for saccades. Together, these findings suggest an ordinal dominance pattern among response modalities, representing flexible, response-based resource scheduling during task-set configuration. |
Stephanie Huette; Christopher T. Kello; Theo Rhodes; Michael J. Spivey Drawing from Memory: Hand-Eye Coordination at Multiple Scales Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58464, 2013. @article{Huette2013, Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well. |
Yueh-Nu Hung "What are you looking at?" An eye movement exploration in science text reading Journal Article In: International Journal of Science and Mathematics Education, vol. 12, pp. 241–260, 2013. @article{Hung2013, The main purpose of this research was to investigate how Taiwanese grade 6 readers selected and used information from different print (main text, headings, captions) and visual elements (decorational, representational, interpretational) to comprehend a science text through tracking their eye movement behaviors. Six grade 6 students read a double page of science text written in Chinese during which their eye movements were documented and analyzed using an EyeLink 1000 eye tracker. The results suggest that illustrations received less attention than print; however, readers who had more fixations on illustrations had better comprehension. While both headings and captions were in the print category, the headings received much less attention than did the captions. The article concludes with implications for teaching science reading and suggestions for future research beyond this exploratory case study. |
Florian Hutzler; Isabella Fuchs; Benjamin Gagl; Sarah Schuster; Fabio Richlan; Mario Braun; Stefan Hawelka Parafoveal X-masks interfere with foveal word recognition: Evidence from fixation-related brain potentials Journal Article In: Frontiers in Systems Neuroscience, vol. 7, pp. 33, 2013. @article{Hutzler2013, The boundary paradigm, in combination with parafoveal masks, is the main technique for studying parafoveal preprocessing during reading. The rationale is that the masks (e.g., strings of X's) prevent parafoveal preprocessing, but do not interfere with foveal processing. A recent study, however, raised doubts about the neutrality of parafoveal masks. In the present study, we explored this issue by means of fixation-related brain potentials (FRPs). Two FRP conditions presented rows of five words. The task of the participant was to judge whether the final word of a list was a "new" word, or whether it was a repeated (i.e., "old") word. The critical manipulation was that the final word was X-masked during parafoveal preview in one condition, whereas another condition presented a valid preview of the word. In two additional event-related brain potential (ERP) conditions, the words were presented serially with no parafoveal preview available; in one of the conditions with a fixed timing, in the other word presentation was self-paced by the participants. Expectedly, the valid-preview FRP condition elicited the shortest processing times. Processing times did not differ between the two ERP conditions indicating that "cognitive readiness" during self-paced processing can be ruled out as an alternative explanation for differences in processing times between the ERP and the FRP conditions. The longest processing times were found in the X-mask FRP condition indicating that parafoveal X-masks interfere with foveal word recognition. |
C. Cavina-Pratesi; Constanze Hesse Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping Journal Article In: Journal of Vision, vol. 13, no. 5, pp. 1–15, 2013. @article{CavinaPratesi2013, Previous research investigating eye movements when grasping objects with precision grip has shown that we tend to fixate close to the contact position of the index finger on the object. It has been hypothesized that this behavior is related to the fact that the index finger usually describes a more variable trajectory than the thumb and therefore requires a higher amount of visual monitoring. We wished to directly test this prediction by creating a grasping task in which either the index finger or the thumb described a more variable trajectory. Experiment 1 showed that the trajectory variability of the digits can be manipulated by altering the direction from which the hand approaches the object. If the start position is located in front of the object (hand-before), the index finger produces a more variable trajectory. In contrast, when the hand approaches the object from a starting position located behind it (hand-behind), the thumb produces a more variable movement path. In Experiment 2, we tested whether the fixation pattern during grasping is altered in conditions in which the trajectory variability of the two digits is reversed. Results suggest that regardless of the trajectory variability, the gaze was always directed toward the contact position of the index finger. Notably, we observed that regardless of our starting position manipulation, the index finger was the first digit to make contact with the object. Hence, we argue that time to contact (and not movement variability) is the crucial parameter which determines where we look during grasping. |
Cindy Chamberland; Jean Saint-Aubin; Marie Andrée Légère The impact of text repetition on content and function words during reading: Further evidence from eye movements Journal Article In: Canadian Journal of Experimental Psychology, vol. 67, no. 2, pp. 94–99, 2013. @article{Chamberland2013, There is ample evidence that reading speed increases when participants read the same text more than once. However, less is known about the impact of text repetition as a function of word class. Some authors suggested that text repetition would mostly benefit content words with little or no effect on function words. In the present study, we examined the effect of multiple readings on the processing of content and function words. Participants were asked to read a short text two times in direct succession. Eye movement analyses revealed the typical multiple readings effect: Repetition decreased the time readers spent fixating words and the probability of fixating critical words. Most importantly, we found that the effect of multiple readings was of the same magnitude for content and function words, and for low- and high-frequency words. Such findings suggest that lexical variables have additive effects on eye movement measures in reading. |
Myriam Chanceaux; Jonathan Grainger Constraints on letter-in-string identification in peripheral vision: Effects of number of flankers and deployment of attention Journal Article In: Frontiers in Psychology, vol. 4, pp. 119, 2013. @article{Chanceaux2013, Effects of non-adjacent flanking elements on crowding of letter stimuli were examined in experiments manipulating the number of flanking elements and the deployment of spatial attention. To this end, identification accuracy of single letters was compared with identification of letter targets surrounded by two, four, or six flanking elements placed symmetrically left and right of the target. Target stimuli were presented left or right of a central fixation, and appeared either unilaterally or with an equivalent number of characters in the contralateral visual field (bilateral presentation). Experiment 1A tested letter targets with random letter flankers, and Experiments 1B and 2 tested letter targets with Xs as flanking stimuli. The results revealed a number of flankers effect that extended beyond standard two-flanker crowding. Flanker interference was stronger with random letter flankers compared with homogeneous Xs, and performance was systematically better under unilateral presentation conditions compared with bilateral presentation. Furthermore, the difference between the zero-flanker and two-flanker conditions was significantly greater under bilateral presentation, whereas the difference between two-flankers and four-flankers did not differ across unilateral and bilateral presentation. The complete pattern of results can be captured by the independent contributions of excessive feature integration and deployment of spatial attention to letter-in-string visibility. |
Myriam Chanceaux; Sebastiaan Mathôt; Jonathan Grainger Flank to the left, flank to the right: Testing the modified receptive field hypothesis of letter-specific crowding Journal Article In: Journal of Cognitive Psychology, vol. 25, no. 6, pp. 774–780, 2013. @article{Chanceaux2013a, The present study tested for effects of number of flankers positioned to the left and to the right of target characters as a function of visual field and stimulus type (letters or shapes). On the basis of the modified receptive field hypothesis (Chanceaux & Grainger, 2012), we predicted that the greatest effects of flanker interference would occur for leftward flankers with letter targets in the left visual field. Target letters and simple shape stimuli were briefly presented and accompanied by either 1, 2, or 3 flankers of the same category either to the left or to the right of the target, and in all conditions with a single flanker on the opposite side. Targets were presented in the left or right visual field at a fixed eccentricity, such that targets and flankers always fell into the same visual field. Results showed greatest interference for leftward flankers associated with letter targets in the left visual field, as predicted by the modified receptive field hypothesis. |
Steve W. C. Chang; Jean-François Gariépy; Michael L. Platt Neuronal reference frames for social decisions in primate frontal cortex Journal Article In: Nature Neuroscience, vol. 16, no. 2, pp. 243–250, 2013. @article{Chang2013, Social decisions are crucial for the success of individuals and the groups that they comprise. Group members respond vicariously to benefits obtained by others, and impairments in this capacity contribute to neuropsychiatric disorders such as autism and sociopathy. We examined the manner in which neurons in three frontal cortical areas encoded the outcomes of social decisions as monkeys performed a reward-allocation task. Neurons in the orbitofrontal cortex (OFC) predominantly encoded rewards that were delivered to oneself. Neurons in the anterior cingulate gyrus (ACCg) encoded reward allocations to the other monkey, to oneself or to both. Neurons in the anterior cingulate sulcus (ACCs) signaled reward allocations to the other monkey or to no one. In this network of received (OFC) and foregone (ACCs) reward signaling, ACCg emerged as an important nexus for the computation of shared experience and social reward. Individual and species-specific variations in social decision-making might result from the relative activation and influence of these areas. |
Chih-Yang Chen; Ziad M. Hafed Postmicrosaccadic enhancement of slow eye movements Journal Article In: Journal of Neuroscience, vol. 33, no. 12, pp. 5375–5386, 2013. @article{Chen2013, Active sensation poses unique challenges to sensory systems because moving the sensor necessarily alters the input sensory stream. Sensory input quality is additionally compromised if the sensor moves rapidly, as during rapid eye movements, making the period immediately after the movement critical for recovering reliable sensation. Here, we studied this immediate postmovement interval for the case of microsaccades during fixation, which rapidly jitter the "sensor" exactly when it is being voluntarily stabilized to maintain clear vision. We characterized retinal-image slip in monkeys immediately after microsaccades by analyzing postmovement ocular drifts. We observed enhanced ocular drifts by up to ~28% relative to premicrosaccade levels, and for up to ~50 ms after movement end. Moreover, we used a technique to trigger full-field image motion contingent on real-time microsaccade detection, and we used the initial ocular following response to this motion as a proxy for changes in early visual motion processing caused by microsaccades. When the full-field image motion started during microsaccades, ocular following was strongly suppressed, consistent with detrimental retinal effects of the movements. However, when the motion started after microsaccades, there was up to ~73% increase in ocular following speed, suggesting an enhanced motion sensitivity. These results suggest that the interface between even the smallest possible saccades and "fixation" includes a period of faster than usual image slip, as well as an enhanced responsiveness to image motion, and that both of these phenomena need to be considered when interpreting the pervasive neural and perceptual modulations frequently observed around the time of microsaccades. |
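Note on the Chen and Hafed entry above: the abstract mentions triggering full-field image motion contingent on real-time microsaccade detection, but it does not describe the detection rule itself. The sketch below illustrates one common online approach, a simple velocity-threshold trigger; it is offered only as an assumption for readers unfamiliar with gaze-contingent designs, and the sampling rate, speed threshold, and duration criterion are illustrative rather than the authors' parameters.

```python
import numpy as np

def microsaccade_trigger(gaze_deg, fs=1000.0, vel_thresh=8.0, min_samples=3):
    """Fire as soon as eye speed exceeds vel_thresh (deg/s) for min_samples
    consecutive samples; returns the index of the triggering sample, or None."""
    gaze = np.asarray(gaze_deg, dtype=float)          # shape (n, 2): x, y in degrees
    speed = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * fs
    run = 0
    for i, fast in enumerate(speed > vel_thresh):
        run = run + 1 if fast else 0
        if run >= min_samples:
            return i + 1                              # gaze sample at which to trigger the display change
    return None

# Toy trace: steady fixation, a small fast displacement, then steady fixation again
trace = [(0.0, 0.0)] * 50 + [(0.02 * k, 0.0) for k in range(1, 8)] + [(0.14, 0.0)] * 50
print(microsaccade_trigger(trace))                    # -> 52 for this toy trace
```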
Harald Clahsen; Loay Balkhair; John Sebastian Schutter; Ian Cunnings The time course of morphological processing in a second language Journal Article In: Second Language Research, vol. 29, no. 1, pp. 7–31, 2013. @article{Clahsen2013, We report findings from psycholinguistic experiments investigating the detailed timing of processing morphologically complex words by proficient adult second (L2) language learners of English in comparison to adult native (L1) speakers of English. The first study employed the masked priming technique to investigate -ed forms with a group of advanced Arabic-speaking learners of English. The results replicate previously found L1/L2 differences in morphological priming, even though in the present experiment an extra temporal delay was offered after the presentation of the prime words. The second study examined the timing of constraints against inflected forms inside derived words in English using the eye-movement monitoring technique and an additional acceptability judgment task with highly advanced Dutch L2 learners of English in comparison to adult L1 English controls. Whilst offline the L2 learners performed native-like, the eye-movement data showed that their online processing was not affected by the morphological constraint against regular plurals inside derived words in the same way as in native speakers. Taken together, these findings indicate that L2 learners are not just slower than native speakers in processing morphologically complex words, but that the L2 comprehension system employs real-time grammatical analysis (in this case, morphological information) less than the L1 system. |
Alasdair D. F. Clarke; Moreno I. Coco; Frank Keller The impact of attentional, linguistic, and visual features during object naming Journal Article In: Frontiers in Psychology, vol. 4, pp. 927, 2013. @article{Clarke2013, Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. |
Charles Clifton Situational context affects definiteness preferences: Accommodation of presuppositions Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 487–501, 2013. @article{Clifton2013, In 4 experiments, we used self-paced reading and eye tracking to demonstrate that readers are, under some conditions, sensitive to the presuppositions of definite versus indefinite determiner phrases (DPs). Reading was faster when the context stereotypically provided a single possible referent for a definite DP or multiple possible referents for an indefinite DP than when context and DP definiteness were mismatched. This finding goes beyond previous evidence that definite DPs are processed more rapidly than are indefinite DPs when there is a unique or familiar referent in the context, showing that readers are sensitive to the semantics and pragmatics of (in)definiteness. However, the finding was obtained only when readers had to perform a simple arithmetic task between reading a sentence and seeing a question about it. The intervening task may have encouraged them to process the sentence more deeply in order to form a representation that would persist while doing the arithmetic. The methodological implications of this observation are discussed. |
Brian A. Coffman; Piyadasa Kodituwakku; Elizabeth L. Kodituwakku; Lucinda Romero; Nirupama Muniswamy Sharadamma; David Stone; Julia M. Stephen Primary visual response (M100) delays in adolescents with FASD as measured with MEG Journal Article In: Human Brain Mapping, vol. 34, no. 11, pp. 2852–2862, 2013. @article{Coffman2013, Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latency of the M100 responses was delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. |
Andrew L. Cohen Software for the automatic correction of recorded eye fixation locations in reading experiments Journal Article In: Behavior Research Methods, vol. 45, no. 3, pp. 679–683, 2013. @article{Cohen2013, Because the recorded location of an eyetracking fixation is not a perfect measure of the actual fixated location, the recorded fixation locations must be adjusted before analysis. Fixations are typically corrected manually. Making such changes, however, is time-consuming and necessarily involves a subjective component. The goal of this article is to introduce software to automate parts of the correction process. The initial focus is on the correction of vertical locations and the removal of outliers and ambiguous fixations in reading experiments. The basic idea behind the algorithm is to use linear regression to assign each fixation to a text line and to identify outliers. The freely available software is implemented as a function, fix_align.R, written in R. |
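Note on the Cohen entry above: the abstract describes the fix_align.R algorithm only at a high level, using linear regression to assign each recorded fixation to a text line and to flag outliers. The Python sketch below illustrates that general idea under simplifying assumptions (known line y-positions, a single global drift slope, and a fixed distance threshold); it is not a port of the published R function, and all parameter values are invented for illustration.

```python
import numpy as np

def assign_fixations_to_lines(fix_x, fix_y, line_y, slope=0.0, out_thresh=40.0):
    """Assign each fixation to the nearest text line after removing a linear
    vertical drift (y ~ slope * x), and flag fixations that remain too far
    from every line as outliers. Units are pixels; values are illustrative."""
    fix_x, fix_y, line_y = map(np.asarray, (fix_x, fix_y, line_y))
    detrended_y = fix_y - slope * fix_x               # remove estimated vertical drift
    dists = np.abs(detrended_y[:, None] - line_y[None, :])
    line_idx = dists.argmin(axis=1)                   # nearest text line per fixation
    outlier = dists.min(axis=1) > out_thresh          # ambiguous or stray fixations
    return line_idx, outlier

# Toy example: three text lines at y = 100, 160, 220 px; the 4th fixation is stray
xs = [50, 150, 250, 350, 60, 160]
ys = [102, 108, 96, 380, 158, 165]
idx, bad = assign_fixations_to_lines(xs, ys, [100, 160, 220])
print(idx, bad)  # -> [0 0 0 2 1 1] [False False False  True False False]
```

Per the abstract, the regression fit is performed by the software itself rather than supplied by the user; the fixed slope argument here is only a stand-in for that fitting step.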
Ian Cunnings; Claudia Felser The role of working memory in the processing of reflexives Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 188–219, 2013. @article{Cunnings2013, We report results from two eye-movement experiments that examined how differences in working memory (WM) capacity affect readers' application of structural constraints on reflexive anaphor resolution during sentence comprehension. We examined whether binding Principle A, a syntactic constraint on the interpretation of reflexives, is reducible to a memory friendly "recency" strategy, and whether WM capacity influences the degree to which readers create anaphoric dependencies ruled out by binding theory. Our results indicate that low and high WM span readers applied Principle A early during processing. However, contrary to previous findings, low span readers also showed immediate intrusion effects of a linearly closer but structurally inaccessible competitor antecedent. We interpret these findings as indicating that although the relative prominence of potential antecedents in WM can affect online anaphor resolution, Principle A is not reducible to a processing or linear distance based "least effort" constraint. |
Roberta Daini; Andrea Albonico; Manuela Malaspina; Marialuisa Martelli; Silvia Primativo; Lisa S. Arduino Dissociation in optokinetic stimulation sensitivity between omission and substitution reading errors in neglect dyslexia Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 581, 2013. @article{Daini2013, Although omission and substitution errors in neglect dyslexia (ND) patients have always been considered as different manifestations of the same acquired reading disorder, recently, we proposed a new dual mechanism model. While omissions are related to the exploratory disorder which characterizes unilateral spatial neglect (USN), substitutions are due to a perceptual integration mechanism. A consequence of this hypothesis is that specific training for omission-type ND patients would aim at restoring the oculo-motor scanning and should not improve reading in substitution-type ND. With this aim we administered an optokinetic stimulation (OKS) to two brain-damaged patients with both USN and ND, MA and EP, who showed ND mainly characterized by omissions and substitutions, respectively. MA also showed an impairment in oculo-motor behavior with a non-reading task, while EP did not. The two patients presented a dissociation with respect to their sensitivity to OKS, so that, as expected, MA was positively affected, while EP was not. Our results confirm a dissociation between the two mechanisms underlying omission and substitution reading errors in ND patients. Moreover, they suggest that such a dissociation could possibly be extended to the effectiveness of rehabilitative procedures, and that patients who mainly omit contralesional-sided letters would benefit from OKS. |
Kirsten A. Dalrymple; Alexander K. Gray; Brielle L. Perler; Elina Birmingham; Walter F. Bischof; Jason J. S. Barton; Alan Kingstone Eyeing the eyes in social scenes: Evidence for top-down control of stimulus selection in simultanagnosia Journal Article In: Cognitive Neuropsychology, vol. 30, no. 1, pp. 25–40, 2013. @article{Dalrymple2013, Simultanagnosia is a disorder of visual attention resulting from bilateral parieto-occipital lesions. Healthy individuals look at eyes to infer people's attentional states, but simultanagnosics allocate abnormally few fixations to eyes in scenes. It is unclear why simultanagnosics fail to fixate eyes, but it might reflect that they (a) are unable to locate and fixate them, or (b) do not prioritize attentional states. We compared eye movements of simultanagnosic G.B. to those of healthy subjects viewing scenes normally or through a restricted window of vision. They described scenes and explicitly inferred attentional states of people in scenes. G.B. and subjects viewing scenes through a restricted window made few fixations on eyes when describing scenes, yet increased fixations on eyes when inferring attention. Thus G.B. understands that eyes are important for inferring attentional states and can exert top-down control to seek out and process the gaze of others when attentional states are of interest. |
Michael Dambacher; Timothy J. Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Evidence for direct control of eye movements during reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 5, pp. 1468–1484, 2013. @article{Dambacher2013, It is well established that fixation durations during reading vary with processing difficulty, but there are different views on how oculomotor control, visual perception, shifts of attention, and lexical (and higher cognitive) processing are coordinated. Evidence for a one-to-one translation of input delay into saccadic latency would provide a much needed constraint for current theoretical proposals. Here, we tested predictions of such a direct-control perspective using the stimulus-onset delay (SOD) paradigm. Words in sentences were initially masked and, on fixation, were individually unmasked with a delay (0-, 33-, 66-, 99-ms SODs). In Experiment 1, SODs were constant for all words in a sentence; in Experiment 2, SODs were manipulated on target words, while nontargets were unmasked without delay. In accordance with predictions of direct control, nonzero SODs entailed equivalent increases in fixation durations in both experiments. Yet, a population of short fixations pointed to rapid saccades as a consequence of low-level information at nonoptimal viewing positions rather than of lexical processing. Implications of these results for theoretical accounts of oculomotor control are discussed. |
Natasha Dare; Richard C. Shillcock Serial and parallel processing in reading: Investigating the effects of parafoveal orthographic information on nonisolated word recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 487–504, 2013. @article{Dare2013, We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual. |
Ido Davidesco; Michal Harel; Michal Ramot; Uri Kramer; Svetlana Kipervasser; Fani Andelman; Miri Y. Neufeld; Gadi Goelman; Itzhak Fried; Rafael Malach Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy Journal Article In: Journal of Neuroscience, vol. 33, no. 3, pp. 1228–1240, 2013. @article{Davidesco2013, One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30–90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties. |
Wei-Ying Chen; Piers D. Howe; Alex O. Holcombe Resource demands of object tracking and differential allocation of the resource Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 4, pp. 710–725, 2013. @article{Chen2013a, The attentional processes for tracking moving objects may be largely hemisphere-specific. Indeed, in our first two experiments the maximum object speed (speed limit) for tracking targets in one visual hemifield (left or right) was not significantly affected by a requirement to track additional targets in the other hemifield. When the additional targets instead occupied the same hemifield as the original targets, the speed limit was reduced. At slow target speeds, however, adding a second target to the same hemifield had little effect. At high target speeds, the cost of adding a same-hemifield second target was approximately as large as would occur if observers could only track one of the targets. This shows that performance with a fast-moving target is very sensitive to the amount of resource allocated. In a third experiment, we investigated whether the resources for tracking can be distributed unequally between two targets. The speed limit for a given target was higher if the second target was slow rather than fast, suggesting that more resource was allocated to the faster of the two targets. This finding was statistically significant only for targets presented in the same hemifield, consistent with the theory of independent resources in the two hemifields. Some limited evidence was also found for resource sharing across hemifields, suggesting that attentional tracking resources may not be entirely hemifield-specific. Together, these experiments indicate that the largely hemisphere-specific tracking resource can be differentially allocated to faster targets. |
Joey T. Cheng; Jessica L. Tracy; Tom Foulsham; Alan Kingstone; Joseph Henrich Two ways to the top: Evidence that dominance and prestige are distinct yet viable avenues to social rank and influence Journal Article In: Journal of Personality and Social Psychology, vol. 104, no. 1, pp. 103–125, 2013. @article{Cheng2013, The pursuit of social rank is a recurrent and pervasive challenge faced by individuals in all human societies. Yet, the precise means through which individuals compete for social standing remains unclear. In 2 studies, we investigated the impact of 2 fundamental strategies-Dominance (the use of force and intimidation to induce fear) and Prestige (the sharing of expertise or know-how to gain respect)-on the attainment of social rank, which we conceptualized as the acquisition of (a) perceived influence over others (Study 1), (b) actual influence over others' behaviors (Study 1), and (c) others' visual attention (Study 2). Study 1 examined the process of hierarchy formation among a group of previously unacquainted individuals, who provided round-robin judgments of each other after completing a group task. Results indicated that the adoption of either a Dominance or Prestige strategy promoted perceptions of greater influence, by both group members and outside observers, and higher levels of actual influence, based on a behavioral measure. These effects were not driven by popularity; in fact, those who adopted a Prestige strategy were viewed as likable, whereas those who adopted a Dominance strategy were not well liked. In Study 2, participants viewed brief video clips of group interactions from Study 1 while their gaze was monitored with an eye tracker. Dominant and Prestigious targets each received greater visual attention than targets low on either dimension. Together, these findings demonstrate that Dominance and Prestige are distinct yet viable strategies for ascending the social hierarchy, consistent with evolutionary theory. |
Dana L. Chesney; Nicole M. McNeil; James R. Brockmole; Ken Kelley An eye for relations: Eye-tracking indicates long-term negative effects of operational thinking on understanding of math equivalence Journal Article In: Memory & Cognition, vol. 41, no. 7, pp. 1079–1095, 2013. @article{Chesney2013, Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school. |
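For readers unfamiliar with the equation format used in the Chesney et al. entry above, the short worked example below spells out what the correct relational interpretation and the two operational strategies named in the abstract yield for the sample item 3 + 4 + 5 = 3 + __. The item is quoted from the abstract; the code is only an illustration of the arithmetic.

```python
# Worked example for the item 3 + 4 + 5 = 3 + __ quoted in the abstract
left = [3, 4, 5]      # numbers before the equal sign
right_given = [3]     # number already present on the right side

relational = sum(left) - sum(right_given)   # 9: both sides then equal 12
add_all = sum(left) + sum(right_given)      # 15: "adding up all the numbers"
add_before = sum(left)                      # 12: "adding the numbers before the equal sign"

print(relational, add_all, add_before)      # -> 9 15 12
```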
Kimberly S. Chiew; Todd S. Braver Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry Journal Article In: Frontiers in Psychology, vol. 4, pp. 15, 2013. @article{Chiew2013, Motivational manipulations, such as the presence of performance-contingent reward incentives, can have substantial influences on cognitive control. Previous evidence suggests that reward incentives may enhance cognitive performance specifically through increased preparatory, or proactive, control processes. The present study examined reward influences on cognitive control dynamics in the AX-Continuous Performance Task (AX-CPT), using high-resolution pupillometry. In the AX-CPT, contextual cues must be actively maintained over a delay in order to appropriately respond to ambiguous target probes. A key feature of the task is that it permits dissociable characterization of preparatory, proactive control processes (i.e., utilization of context) and reactive control processes (i.e., target-evoked interference resolution). Task performance profiles suggested that reward incentives enhanced proactive control (context utilization). Critically, pupil dilation was also increased on reward incentive trials during context maintenance periods, suggesting trial-specific shifts in proactive control, particularly when context cues indicated the need to overcome the dominant target response bias. Reward incentives had both transient (i.e., trial-by-trial) and sustained (i.e., block-based) effects on pupil dilation, which may reflect distinct underlying processes. The transient pupillary effects were present even when comparing against trials matched in task performance, suggesting a unique motivational influence of reward incentives. These results suggest that pupillometry may be a useful technique for investigating reward motivational signals and their dynamic influence on cognitive control. |
Wonil Choi; Peter C. Gordon Coordination of word recognition and oculomotor control during reading: The role of implicit lexical decisions Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 39, no. 4, pp. 1032–1046, 2013. @article{Choi2013, The coordination of word-recognition and oculomotor processes during reading was evaluated in eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition, but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target's TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. |
John Christie; Matthew D. Hilchey; Raymond M. Klein Inhibition of return is at the midpoint of simultaneous cues Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1610–1618, 2013. @article{Christie2013, When multiple cues are presented simultaneously, Klein, Christie, and Morris (Psychonomic Bulletin & Review 12:295-300, 2005) found a gradient of inhibition (of return, IOR), with the slowest simple manual detection responses occurring to targets in the direction of the center of gravity of the cues. Here, we explored the possibility of extending this finding to the saccade response modality, using methods of data analysis that allowed us to consider the relative contributions of the distance from the target to the center of gravity of the array of cues and the nearest element in the cue array. We discovered that the bulk of the IOR effect with multiple cues, in both the previous and present studies, can be explained by the distance between the target and the center of gravity of the cue array. The present results are consistent with the proposal advanced by Klein et al. (2005) suggesting that this IOR effect is due to population coding in the oculomotor pathways (e.g., the superior colliculus) driving the eye movement system toward the center of gravity of the cued array. |
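Note on the Christie, Hilchey, and Klein entry above: the analysis contrasts two predictors of the IOR gradient, the distance from the target to the center of gravity of the cue array and the distance to the nearest cue. The sketch below simply computes both quantities for a hypothetical cue configuration; the coordinates are invented for illustration and are not taken from the study.

```python
import math

cues = [(8.0, 0.0), (0.0, 8.0), (-8.0, 0.0)]    # simultaneously cued locations (deg)
target = (4.0, 4.0)                             # probed target location (deg)

# Center of gravity of the cue array
cog = (sum(x for x, _ in cues) / len(cues), sum(y for _, y in cues) / len(cues))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

d_cog = dist(target, cog)                       # predictor favored by the reported data
d_nearest = min(dist(target, c) for c in cues)  # competing nearest-cue predictor

print(round(d_cog, 2), round(d_nearest, 2))     # -> 4.22 5.66
```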
Mitchell J. Callan; Heather J. Ferguson; Markus Bindemann Eye movements to audiovisual scenes reveal expectations of a just world Journal Article In: Journal of Experimental Psychology: General, vol. 142, no. 1, pp. 34–40, 2013. @article{Callan2013, When confronted with bad things happening to good people, observers often engage reactive strategies, such as victim derogation, to maintain a belief in a just world. Although such reasoning is usually made retrospectively, we investigated the extent to which knowledge of another person's good or bad behavior can also bias people's online expectations for subsequent good or bad outcomes. Using a fully crossed design, participants listened to auditory scenarios that varied in terms of whether the characters engaged in morally good or bad behavior while their eye movements were tracked around concurrent visual scenes depicting good and bad outcomes. We found that the good (bad) behavior of the characters influenced gaze preferences for good (bad) outcomes just prior to the actual outcomes being revealed. These findings suggest that beliefs about a person's moral worth encourage observers to foresee a preferred deserved outcome as the event unfolds. We include evidence to show that this effect cannot be explained in terms of affective priming or matching strategies. |
Manuel G. Calvo; Andrés Fernández-Martín Can the eyes reveal a person's emotions? Biasing role of the mouth expression Journal Article In: Motivation and Emotion, vol. 37, no. 1, pp. 202–211, 2013. @article{Calvo2013, In this study we investigated how perception of the eye expression in a face is influenced by the mouth expression, even when only the eyes are directly looked at. The same eyes appeared in a face with either an incongruent smiling, angry, or sad mouth, a congruent mouth, or no mouth. Attention was directed to the eyes by means of cueing and there were no fixations on the mouth. Participants evaluated whether the eyes were happy (or angry, or sad) or not. Results indicated that the smile biased the evaluation of the eyes towards happiness to a greater extent than an angry or a sad mouth did towards anger or sadness. The smiling mouth was also more visually salient than the angry and the sad mouths. We conclude that the role of the eyes as a 'window' to a person's emotional and motivational state is constrained and distorted by the configural projection of an expressive mouth, and that this effect is enhanced by the high visual saliency of the smile. |
Manuel G. Calvo; Andrés Fernández-Martín; Lauri Nummenmaa A smile biases the recognition of eye expressions: Configural projection from a salient mouth Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1159–1181, 2013. @article{Calvo2013a, A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes. |
Manuel G. Calvo; Aida Gutiérrez-García; Pedro Avero; Daniel Lundqvist Attentional mechanisms in judging genuine and fake smiles: Eye-movement patterns Journal Article In: Emotion, vol. 13, no. 4, pp. 792–802, 2013. @article{Calvo2013b, We investigated the visual attention patterns (i.e., where, when, how frequently, and how long viewers look at each face region) for faces with (a) genuine, enjoyment smiles (i.e., a smiling mouth and happy eyes with the Duchenne marker), (b) fake, nonenjoyment smiles (a smiling mouth but nonhappy eyes: neutral, surprised, fearful, sad, disgusted, or angry), or (c) no smile (and nonhappy eyes). Viewers evaluated whether the faces conveyed happiness ("felt happy") or not, while eye movements were monitored. Results indicated, first, that the smiling mouth captured the first fixation more likely and faster than the eyes, regardless of type of eyes. This reveals similar attentional orienting to genuine and fake smiles. Second, the mouth and, especially, the eyes of faces with fake smiles received more fixations and longer dwell times than those of faces with genuine smiles. This reveals attentional engagement, with a processing cost for fake smiles. Finally, when the mouth of faces with fake smiles was fixated earlier than the eyes, the face was likely to be judged as genuinely happy. This suggests that the first fixation on the smiling mouth biases the viewer to misinterpret the emotional state underlying blended expressions. |
E. Camara; Sanjay G. Manohar; Masud Husain Past rewards capture spatial attention and action choices Journal Article In: Experimental Brain Research, vol. 230, no. 3, pp. 291–300, 2013. @article{Camara2013, The desire to increase rewards and minimize punishing events is a powerful driver in behaviour. Here, we assess how the value of a location affects subsequent deployment of goal-directed attention as well as involuntary capture of attention on a trial-to-trial basis. By tracking eye position, we investigated whether the ability of an irrelevant, salient visual stimulus to capture gaze (stimulus-driven attention) is modulated by that location's previous value. We found that distractors draw attention to them significantly more if they appear at a location previously associated with a reward, even when gazing towards them now leads to punishments. Within the same experiment, it was possible to demonstrate that a location associated with a reward can also bias subsequent goal-directed attention (indexed by action choices) towards it. Moreover, individuals who were vulnerable to being distracted by previous reward history, as indexed by oculomotor capture, were also more likely to direct their actions to those locations when they had a free choice. Even when the number of initial responses made to rewarded and punished stimuli was equalized, the effects of previous reward history on both distractibility and action choices remained. Finally, a covert attention task requiring button-press responses rather than overt gaze shifts demonstrated the same pattern of findings. Thus, past rewards can act to modulate both subsequent stimulus-driven as well as goal-directed attention. These findings reveal that there can be surprising short-term costs of using reward cues to regulate behaviour. They show that current valence information, if maintained inappropriately, can have negative subsequent effects, with attention and action choices being vulnerable to capture and bias, mechanisms that are of potential importance in understanding distractibility and abnormal action choices. |
Ian G. M. Cameron; Donald C. Brien; Kira Links; Sarah Robichaud; Jennifer D. Ryan; Douglas P. Munoz; Tiffany W. Chow Changes to saccade behaviors in parkinson's disease following dancing and observation of dancing Journal Article In: Frontiers in Neurology, vol. 4, pp. 22, 2013. @article{Cameron2013, BACKGROUND: The traditional view of Parkinson's disease (PD) as a motor disorder only treated by dopaminergic medications is now shifting to include non-pharmacologic interventions. We have noticed that patients with PD obtain an immediate, short-lasting benefit to mobility by the end of a dance class, suggesting some mechanism by which dancing reduces bradykinetic symptoms. We have also found that patients with PD are unimpaired at initiating highly automatic eye movements to visual stimuli (pro-saccades) but are impaired at generating willful eye movements away from visual stimuli (anti-saccades). We hypothesized that the mechanisms by which a dance class improves movement initiation may generalize to the brain networks impacted in PD (frontal lobe and basal ganglia, BG), and thus could be assessed objectively by measuring eye movements, which rely on the same neural circuitry. METHODS: Participants with PD performed pro- and anti-saccades before, and after, a dance class. "Before" and "after" saccade performance measurements were compared. These measurements were then contrasted with a control condition (observing a dance class in a video), and with older and younger adult populations, who rested for an hour between measurements. RESULTS: We found an improvement in anti-saccade performance following the observation of dance (but not following dancing), but we found a detriment in pro-saccade performance following dancing. CONCLUSION: We suggest that observation of dance induced plasticity changes in frontal-BG networks that are important for executive control. Dancing, in contrast, increased voluntary movement signals that benefited mobility, but interfered with the automaticity of efficient pro-saccade execution. |
Anneloes R. Canestrelli; Willem M. Mak; Ted J. M. Sanders Causal connectives in discourse processing: How differences in subjectivity are reflected in eye movements Journal Article In: Language and Cognitive Processes, vol. 28, no. 9, pp. 1394–1413, 2013. @article{Canestrelli2013, Causal connectives are often considered to provide crucial information about the discourse structure; they signal a causal relation between two text segments. However, in many languages of the world causal connectives specialise in either subjective or objective causal relations. We investigate whether this type of (discourse) information is used during the online processing of causal connectives by focusing on the Dutch connectives want and omdat, both translated by because. In three eye-tracking studies we demonstrate that the Dutch connective want, which is a prototypical marker of subjective CLAIM-ARGUMENT relations, leads to an immediate processing disadvantage compared to omdat, a prototypical marker of objective CONSEQUENCE-CAUSE relations. This effect was observed at the words immediately following the connective, at which point readers cannot yet establish the causal relation on the basis of the content, which means that the effect is solely induced by the connectives. In Experiment 2 we demonstrate that this effect is related to the representation of the first clause of a want relation as a mental state. In Experiment 3, we show that the use of omdat in relations that do not allow for a CONSEQUENCE-CAUSE interpretation leads to serious processing difficulties at the end of those relations. On the basis of these results, we argue that want triggers a subjective mental state interpretation of S1, whereas omdat triggers the construction of an objective CONSEQUENCE-CAUSE relation. These results illustrate that causal connectives provide subtle information about semantic-pragmatic distinctions between types of causal relations, which immediately influences online processing. |
Almudena Capilla; Pascal Belin; Joachim Gross The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG Journal Article In: Cerebral Cortex, vol. 23, no. 6, pp. 1388–1395, 2013. @article{Capilla2013, Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ∼300-400 ms, or earlier at ∼200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing. |
Rodrigo A. Cárdenas; Lauren Julius Harris; Mark W. Becker Sex differences in visual attention toward infant faces Journal Article In: Evolution and Human Behavior, vol. 34, no. 4, pp. 280–287, 2013. @article{Cardenas2013, Parental care and alloparental care are major evolutionary dimensions of the biobehavioral repertoire of many species, including human beings. Despite their importance in the course of human evolution and the likelihood that they have significantly shaped human cognition, the nature of the cognitive mechanisms underlying alloparental care is still largely unexplored. In this study, we examined whether one such cognitive mechanism is a visual attentional bias toward infant features, and if so, whether and how it is related to the sex of the adult and the adult's self-reported interest in infants. We used eye-tracking to measure the eye movements of nulliparous undergraduates while they viewed pairs of faces consisting of one adult face (a man or woman) and one infant face (a boy or girl). Subjects then completed two questionnaires designed to measure their interest in infants. Results showed, consistent with the significance of alloparental care in human evolution, that nulliparous adults have an attentional bias toward infants. Results also showed that women's interest in and attentional bias towards infants were stronger and more stable than men's. These findings are consistent with the hypothesis that, due to their central role in infant care, women have evolved a greater and more stable sensitivity to infants. The results also show that eye movements can be successfully used to assess individual differences in interest in infants. |
Maria Nella Carminati; Pia Knoeferle Effects of speaker emotional facial expression and listener age on incremental sentence processing Journal Article In: PLoS ONE, vol. 8, no. 9, pp. e72559, 2013. @article{Carminati2013, We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing. |
Thomas C. Cassey; David R. Evens; Rafal Bogacz; James A. R. Marshall; Casimir J. H. Ludwig Adaptive sampling of information during perceptual decision-making Journal Article In: PLoS ONE, vol. 8, no. 11, pp. e78993, 2013. @article{Cassey2013, In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy. |
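The time-allocation rule summarized in the Cassey et al. abstract above (sample each source in proportion to its noise level) can be illustrated with a minimal sketch. This is not code from the paper; the noise values, the viewing budget, and the function name are hypothetical.

```python
# Minimal sketch (not from Cassey et al., 2013): divide a fixed viewing budget
# across information sources in proportion to their noise levels.

def allocate_sampling_time(noise_levels, total_time):
    """Return per-source sampling times proportional to each source's noise level."""
    total_noise = sum(noise_levels)
    return [total_time * noise / total_noise for noise in noise_levels]

# Example: two random dot motion patterns, the second twice as noisy as the
# first, and a 1.5 s viewing budget -> roughly 0.5 s vs. 1.0 s of sampling.
print(allocate_sampling_time([1.0, 2.0], total_time=1.5))
```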
Francisco M. Costela; Michael B. McCamy; Stephen L. Macknik; Jorge Otero-Millan; Susana Martinez-Conde Microsaccades restore the visibility of minute foveal targets Journal Article In: PeerJ, vol. 1, pp. 1–14, 2013. @article{Costela2013, Stationary targets can fade perceptually during steady visual fixation, a phenomenon known as Troxler fading. Recent research found that microsaccades (small, involuntary saccades produced during attempted fixation) can restore the visibility of faded targets, both in the visual periphery and in the fovea. Because the targets tested previously extended beyond the foveal area, however, the ability of microsaccades to restore the visibility of foveally-contained targets remains unclear. Here, subjects reported the visibility of low-to-moderate contrast targets contained entirely within the fovea during attempted fixation. The targets did not change physically, but their visibility varied intermittently during fixation, in an illusory fashion (i.e., foveal Troxler fading). Microsaccade rates increased significantly before the targets became visible, and decreased significantly before the targets faded, for a variety of target contrasts. These results support previous research linking microsaccade onsets to the visual restoration of peripheral and foveal targets, and extend the former conclusions to minute targets contained entirely within the fovea. Our findings suggest that the involuntary eye movements produced during attempted fixation do not always prevent fading, in either the fovea or the periphery, and that microsaccades can restore perception when fading does occur. Therefore, microsaccades are relevant to human perception of foveal stimuli. |
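To make the central measure in the Costela et al. study above concrete, the sketch below computes microsaccade rate in a fixed window preceding perceptual reports. It is an illustration only: the onset times, report times, window length, and function name are made up rather than taken from the paper.

```python
# Illustrative sketch (hypothetical data, not the authors' analysis code):
# compare microsaccade rate in the second preceding "target became visible"
# reports with the rate preceding "target faded" reports.
import numpy as np

def rate_before_events(ms_onsets, event_times, window=1.0):
    """Mean microsaccade rate (per second) in the `window` seconds before each event."""
    ms_onsets = np.asarray(ms_onsets)
    counts = [np.sum((ms_onsets > t - window) & (ms_onsets <= t)) for t in event_times]
    return np.mean(counts) / window

ms_onsets = [0.4, 1.1, 2.9, 3.2, 5.0, 6.7]   # microsaccade onset times (s)
visible_reports = [1.3, 3.4]                 # times the faded target reappeared
faded_reports = [5.9]                        # times the target faded
print(rate_before_events(ms_onsets, visible_reports),   # higher rate expected
      rate_before_events(ms_onsets, faded_reports))     # lower rate expected
```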
M. Gabriela Costello; Dantong Zhu; Emilio Salinas; Terrence R. Stanford Perceptual modulation of motor (but not visual) responses in the frontal eye field during an urgent-decision task Journal Article In: Journal of Neuroscience, vol. 33, no. 41, pp. 16394–16408, 2013. @article{Costello2013, Neuronal activity in the frontal eye field (FEF) ranges from purely motor (related to saccade production) to purely visual (related to stimulus presence). According to numerous studies, visual responses correlate strongly with early perceptual analysis of the visual scene, including the deployment of spatial attention, whereas motor responses do not. Thus, functionally, the consensus is that visually responsive FEF neurons select a target among visible objects, whereas motor-related neurons plan specific eye movements based on such earlier target selection. However, these conclusions are based on behavioral tasks that themselves promote a serial arrangement of perceptual analysis followed by motor planning. So, is the presumed functional hierarchy in FEF an intrinsic property of its circuitry or does it reflect just one possible mode of operation? We investigate this in monkeys performing a rapid-choice task in which, crucially, motor planning always starts ahead of task-critical perceptual analysis, and the two relevant spatial locations are equally informative and equally likely to be target or distracter. We find that the choice is instantiated in FEF as a competition between oculomotor plans, in agreement with model predictions. Notably, although perception strongly influences the motor neurons, it has little if any measurable impact on the visual cells; more generally, the more dominant the visual response, the weaker the perceptual modulation. The results indicate that, contrary to expectations, during rapid saccadic choices perceptual information may directly modulate ongoing saccadic plans, and this process is not contingent on prior selection of the saccadic goal by visually driven FEF responses. |
Christopher D. Cowper-Smith; Gail A. Eskes; David A. Westwood Motor inhibition of return can affect prepared reaching movements Journal Article In: Neuroscience Letters, vol. 541, pp. 83–86, 2013. @article{CowperSmith2013a, Inhibition of return (IOR) is a widely studied phenomenon that is thought to affect attention, eye movements, or reaching movements, in order to promote orienting responses toward novel stimuli. Previous research in our laboratory demonstrated that the motor form of saccadic IOR can arise from late-stage response execution processes. In the present study, we were interested in whether the same is true of reaching responses. If IOR can emerge from processes operating at or around the time of response execution, then IOR should be observed even when participants have fully prepared their responses in advance of the movement initiation signal. Similar to the saccadic system, our results reveal that IOR can be implemented as a late-stage execution bias in the reaching control system. |
Christopher D. Cowper-Smith; Jonathan W. Harris; Gail A. Eskes; David A. Westwood Spatial interactions between successive eye and arm movements: Signal type matters Journal Article In: PLoS ONE, vol. 8, no. 3, pp. e58850, 2013. @article{CowperSmith2013b, Spatial interactions between consecutive movements are often attributed to inhibition of return (IOR), a phenomenon in which responses to previously signalled locations are slower than responses to unsignalled locations. In two experiments using peripheral target signals offset by 0°, 90°, or 180°, we show that consecutive saccadic (Experiment 1) and reaching (Experiment 3) responses exhibit a monotonic pattern of reaction times consistent with the currently established spatial distribution of IOR. In contrast, in two experiments with central target signals (i.e., arrowheads pointing at target locations), we find a non-monotonic pattern of reaction times for saccades (Experiment 2) and reaching movements (Experiment 4). The difference in the patterns of results observed demonstrates different behavioral effects that depend on signal type. The pattern of results observed for central stimuli are consistent with a model in which neural adaptation is occurring within motor networks encoding movement direction in a distributed manner. |
Christopher D. Cowper-Smith; David A. Westwood Motor IOR revealed for reaching Journal Article In: Attention, Perception, and Psychophysics, vol. 75, no. 8, pp. 1914–1922, 2013. @article{CowperSmith2013, Inhibition of return (IOR) is a spatial phenomenon that is thought to promote visual search functions by biasing attention and eye movements toward novel locations. Considerable research suggests distinct sensory and motor flavors of IOR, but it is not clear whether the motor type can affect responses other than eye movements. Most studies claiming to reveal motor IOR in the reaching control system have been confounded by their use of peripheral signals, which can invoke sensory rather than motor-based inhibitory effects. Other studies have used central signals to focus on motor, rather than sensory, effects in arm movements but have failed to observe IOR and have concluded that the motor form of IOR is restricted to the oculomotor system. Here, we show the first clear evidence that motor IOR can be observed for reaching movements when participants respond to consecutive central stimuli. This observation suggests that motor IOR serves a more general function than the facilitation of visual search, perhaps reducing the likelihood of engaging in repetitive behavior. |
Michele A. Cox; Michael C. Schmid; Andrew J. Peters; Richard C. Saunders; David A. Leopold; Alexander Maier Receptive field focus of visual area V4 neurons determines responses to illusory surfaces Journal Article In: Proceedings of the National Academy of Sciences, vol. 110, no. 42, pp. 17095–17100, 2013. @article{Cox2013, Illusory figures demonstrate the visual system's ability to infer surfaces under conditions of fragmented sensory input. To investigate the role of midlevel visual area V4 in visual surface completion, we used multielectrode arrays to measure spiking responses to two types of visual stimuli: Kanizsa patterns that induce the perception of an illusory surface and physically similar control stimuli that do not. Neurons in V4 exhibited stronger and sometimes rhythmic spiking responses for the illusion-promoting configurations compared with controls. Moreover, this elevated response depended on the precise alignment of the neuron's peak visual field sensitivity (receptive field focus) with the illusory surface itself. Neurons whose receptive field focus was over adjacent inducing elements, less than 1.5° away, did not show response enhancement to the illusion. Neither receptive field sizes nor fixational eye movements could account for this effect, which was present in both single-unit signals and multiunit activity. These results suggest that the active perceptual completion of surfaces and shapes, which is a fundamental problem in natural visual experience, draws upon the selective enhancement of activity within a distinct subpopulation of neurons in cortical area V4. |
Abbie L. Coy; Samuel B. Hutton Lateral asymmetry in saccadic eye movements during face processing: The role of individual differences in schizotypy Journal Article In: Cognitive Neuroscience, vol. 4, no. 2, pp. 66–72, 2013. @article{Coy2013, Healthy individuals with high as compared to low levels of schizotypal personality traits make more first saccades to the left side of faces, suggesting increased right hemisphere (RH) dominance for face processing. Patients with schizophrenia, however, show attenuated or reversed RH dominance for face processing. It is unclear whether the increased RH dominance found in high schizotypes is specific to face processing or whether it is also observable for other stimuli matched in terms of low-level visual properties. We measured gaze to faces and symmetrical fractal patterns and found higher Magical Ideation (MI) is associated with an increased left-side bias for initial saccade landing points and dwell times when free-viewing faces. These laterality biases were unaffected by facial emotion. Schizotypy scores were not related to laterality biases when viewing fractals. Our results provide further evidence that high schizotypy is associated with an increase in RH dominance for face processing. |
Abbie L. Coy; Samuel B. Hutton The influence of hallucination proneness and social threat on time perception Journal Article In: Cognitive Neuropsychiatry, vol. 18, no. 6, pp. 463–476, 2013. @article{Coy2013a, Introduction. Individuals with schizophrenia frequently report disturbances in time perception, but the precise nature of such deficits and their relation to specific symptoms of the disorder is unclear. We sought to determine the relationship between hallucination proneness and time perception in healthy individuals, and whether this relationship is moderated by hypervigilance to threat-related stimuli. Methods. 206 participants completed the Revised Launay-Slade Hallucination Scale (LSHS-R) and a time reproduction task in which, on each trial, participants viewed a face (happy, angry, neutral, or fearful) for between 1 and 5 s and then reproduced the time period with a spacebar press. Results. High LSHS-R scores were associated with longer time estimates, but only during exposure to angry faces. A factor analysis of LSHS-R scores identified a factor comprising items related to reality monitoring, and this factor was most associated with the longer time estimates. Conclusions. During exposure to potential threat in the environment, duration estimates increase with hallucination proneness. The experience of feeling exposed to threat for longer may serve to maintain a state of hypervigilance which has been shown previously to be associated with positive symptoms of schizophrenia. |
Lei Cui; Denis Drieghe; Guoli Yan; Xuejun Bai; Hui Chi; Simon P. Liversedge Parafoveal processing across different lexical constituents in Chinese reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 2, pp. 403–416, 2013. @article{Cui2013a, We report a boundary paradigm eye movement experiment to investigate whether the linguistic category of a two-character Chinese string affects how the second character of that string is processed in the parafovea during reading. We obtained clear preview effects in all conditions but, more importantly, found parafoveal-on-foveal effects whereby a nonsense preview of the second character influenced fixations on the first character. This effect occurred for monomorphemic words, but not for compound words or phrases. Also, in a word boundary demarcation experiment, we demonstrate that Chinese readers are not always consistent in their judgements of which characters in a sentence constitute words. We conclude that information regarding the combinatorial properties of characters in Chinese is used online to moderate the extent to which parafoveal characters are processed. |
Lei Cui; Guoli Yan; Xuejun Bai; Jukka Hyönä; Suiping Wang; Simon P. Liversedge Processing of compound-word characters in reading Chinese: An eye-movement-contingent display change study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 527–547, 2013. @article{Cui2013, Readers' eye movements were monitored as they read Chinese two-constituent compound words in sentence contexts. The first compound-word constituent was either an infrequent character with a highly predictable second constituent or a frequent character with an unpredictable second constituent. The parafoveal preview of the second constituent was manipulated, with four preview conditions: identical to the correct form; a semantically related character to the second constituent; a semantically unrelated character to the second constituent; and a pseudocharacter. An invisible boundary was set between the two constituents; when the eyes moved across the boundary, the previewed character was changed to its intended form. The main findings were that preview effects occurred for the second constituent of the compound word. Providing an incorrect preview of the second constituent affected fixations on the first constituent, but only when the second constituent was predictable from the first. The frequency of the initial character of the compound constrained the identity of the second character, and this in turn modulated the extent to which the semantic characteristics of the preview influenced processing of the second constituent and the compound word as a whole. The results are considered in relation to current accounts of Chinese compound-word recognition and the constraint hypothesis of Hyönä, Bertram, and Pollatsek ( 2004 ). We conclude that word identification in Chinese is flexible, and parafoveal processing of upcoming characters is influenced both by the characteristics of the fixated character and by its relationship with the characters in the parafovea. |
Yuwei Cui; Liu D. Liu; Farhan A. Khawaja; Christopher C. Pack; Daniel A. Butts Diverse suppressive influences in area MT and selectivity to complex motion features Journal Article In: Journal of Neuroscience, vol. 33, no. 42, pp. 16715–16728, 2013. @article{Cui2013c, Neuronal selectivity results from both excitatory and suppressive inputs to a given neuron. Suppressive influences can often significantly modulate neuronal responses and impart novel selectivity in the context of behaviorally relevant stimuli. In this work, we use a naturalistic optic flow stimulus to explore the responses of neurons in the middle temporal area (MT) of the alert macaque monkey; these responses are interpreted using a hierarchical model that incorporates relevant nonlinear properties of upstream processing in the primary visual cortex (V1). In this stimulus context, MT neuron responses can be predicted from distinct excitatory and suppressive components. Excitation is spatially localized and matches the measured preferred direction of each neuron. Suppression is typically composed of two distinct components: (1) a directionally untuned component, which appears to play the role of surround suppression and normalization; and (2) a direction-selective component, with comparable tuning width as excitation and a distinct spatial footprint that is usually partially overlapping with excitation. The direction preference of this direction-tuned suppression varies widely across MT neurons: approximately one-third have overlapping suppression in the opposite direction as excitation, and many other neurons have suppression with similar direction preferences to excitation. There is also a population of MT neurons with orthogonally oriented suppression. We demonstrate that direction-selective suppression can impart selectivity of MT neurons to more complex velocity fields and that it can be used for improved estimation of the three-dimensional velocity of moving objects. Thus, considering MT neurons in a complex stimulus context reveals a diverse set of computations likely relevant for visual processing in natural visual contexts. |
Susanne Brouwer; Holger Mitterer; Falk Huettig Discourse context and the recognition of reduced and canonical spoken words Journal Article In: Applied Psycholinguistics, vol. 34, no. 3, pp. 519–539, 2013. @article{Brouwer2013, In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms. |
Harriet R. Brown; Karl J. Friston The functional anatomy of attention: A DCM study Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 784, 2013. @article{Brown2013, Recent formulations of attention—in terms of predictive coding—associate attentional gain with the expected precision of sensory information. Formal models of the Posner paradigm suggest that validity effects can be explained in a principled (Bayes optimal) fashion in terms of a cue-dependent setting of precision or gain on the sensory channels reporting anticipated target locations, which is updated selectively by invalid targets. This normative model is equipped with a biologically plausible process theory in the form of predictive coding, where precision is encoded by the gain of superficial pyramidal cells reporting prediction error. We used dynamic causal modeling to assess the evidence in magnetoencephalographic responses for cue-dependent and top-down updating of superficial pyramidal cell gain. Bayesian model comparison suggested that it is almost certain that differences in superficial pyramidal cells gain—and its top-down modulation—contribute to observed responses; and we could be more than 80% certain that anticipatory effects on post-synaptic gain are limited to visual (extrastriate) sources. These empirical results speak to the role of attention in optimizing perceptual inference and its formulation in terms of predictive coding. |
Enzo P. Brunetti; Pedro E. Maldonado; Francisco Aboitiz Phase synchronization of delta and theta oscillations increases during the detection of relevant lexical information Journal Article In: Frontiers in Psychology, vol. 4, pp. 308, 2013. @article{Brunetti2013, During monitoring of the discourse, the detection of the relevance of incoming lexical information could be critical for its incorporation to update mental representations in memory. Because, in these situations, the relevance for lexical information is defined by abstract rules that are maintained in memory, a central aspect to elucidate is how an abstract level of knowledge maintained in mind mediates the detection of the lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on "kept in mind" rules deriving from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task, in which the semantic relevance was previously defined based on the congruency between a kept in mind category (abstract knowledge), and the lexical semantic information presented. Our results show that during the detection of the relevant lexical information, phase synchronization of neuronal oscillations selectively increases in delta and theta frequency bands during the interval of semantic analysis. These increments occurred irrespective of the semantic category maintained in memory, had a temporal profile specific for each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Also, recruitment of an increased number of pairs of electrodes was a robust observation during the detection of semantic contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule, could be mediated by increasing the global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions. |
Janet H. Bultitude; Stefan Van der Stigchel; Tanja C. W. Nijboer Prism adaptation alters spatial remapping in healthy individuals: Evidence from double-step saccades Journal Article In: Cortex, vol. 49, no. 3, pp. 759–770, 2013. @article{Bultitude2013, The visual system is able to represent and integrate large amounts of information as we move our gaze across a scene. This process, called spatial remapping, enables the construction of a stable representation of our visual environment despite constantly changing retinal images. Converging evidence implicates the parietal lobes in this process, with the right hemisphere having a dominant role. Indeed, lesions to the right parietal lobe (e.g., leading to hemispatial neglect) frequently result in deficits in spatial remapping. Research has demonstrated that recalibrating visual, proprioceptive and motor reference frames using prism adaptation ameliorates neglect symptoms and induces neglect-like performance in healthy people - one example of the capacity for rapid neural plasticity in response to new sensory demands. Because of the influence of prism adaptation on parietal functions, the present research investigates whether prism adaptation alters spatial remapping in healthy individuals. To this end twenty-eight undergraduates completed blocks of a double-step saccade (DSS) task after sham adaptation and adaptation to leftward- or rightward-shifting prisms. The results were consistent with an impairment in spatial remapping for left visual field targets following adaptation to leftward-shifting prisms. These results suggest that temporarily realigning spatial representations using sensory-motor adaptation alters right-hemisphere remapping processes in healthy individuals. The implications for the possible mechanisms of the amelioration of hemispatial neglect after prism adaptation are discussed. |
Antimo Buonocore; Robert D. McIntosh Attention modulates saccadic inhibition magnitude Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 6, pp. 1051–1059, 2013. @article{Buonocore2013, Visual transient events during ongoing eye movement tasks inhibit saccades within a precise temporal window, spanning from around 60-120 ms after the event, having maximum effect at around 90 ms. It is not yet clear to what extent this saccadic inhibition phenomenon can be modulated by attention. We studied the saccadic inhibition induced by a bright flash above or below fixation, during the preparation of a saccade to a lateralized target, under two attentional manipulations. Experiment 1 demonstrated that exogenous precueing of a distractor's location reduced saccadic inhibition, consistent with inhibition of return. Experiment 2 manipulated the relative likelihood that a distractor would be presented above or below fixation. Saccadic inhibition magnitude was relatively reduced for distractors at the more likely location, implying that observers can endogenously suppress interference from specific locations within an oculomotor map. We discuss the implications of these results for models of saccade target selection in the superior colliculus. |
Wesley K. Burge; Lesley A. Ross; Franklin R. Amthor; William G. Mitchell; Alexander Zotov; Kristina M. Visscher Processing speed training increases the efficiency of attentional resource allocation in young adults Journal Article In: Frontiers in Human Neuroscience, vol. 7, pp. 684, 2013. @article{Burge2013, Cognitive training has been shown to improve performance on a range of tasks. However, the mechanisms underlying these improvements are still unclear. Given the wide range of transfer effects, it is likely that these effects are due to a factor common to a wide range of tasks. One such factor is a participant's efficiency in allocating limited cognitive resources. The impact of a cognitive training program, Processing Speed Training (PST), on the allocation of resources to a set of visual tasks was measured using pupillometry in 10 young adults as compared to a control group of 10 young adults (n = 20). PST is a well-studied computerized training program that involves identifying simultaneously presented central and peripheral stimuli. As training progresses, the task becomes increasingly more difficult, by including peripheral distracting stimuli and decreasing the duration of stimulus presentation. Analysis of baseline data confirmed that pupil diameter reflected cognitive effort. After training, participants randomized to PST used fewer attentional resources to perform complex visual tasks as compared to the control group. These pupil diameter data indicated that PST appears to increase the efficiency of attentional resource allocation. Increases in cognitive efficiency have been hypothesized to underlie improvements following experience with action video games, and improved cognitive efficiency has been hypothesized to underlie the benefits of PST in older adults. These data reveal that these training schemes may share a common underlying mechanism of increasing cognitive efficiency in younger adults. |
Melanie R. Burke; P. Bramley; Claudia C. Gonzalez; D. J. McKeefry The contribution of the right supra-marginal gyrus to sequence learning in eye movements Journal Article In: Neuropsychologia, vol. 51, no. 14, pp. 3048–3056, 2013. @article{Burke2013, We investigated the role of the human right Supra-Marginal Gyrus (SMG) in the generation of learned eye movement sequences. Using MRI-guided transcranial magnetic stimulation (TMS) we disrupted neural activity in the SMG whilst human observers performed saccadic eye movements to multiple presentations of either predictable or random target sequences. For the predictable sequences we observed shorter saccadic latencies from the second presentation of the sequence. However, these anticipatory improvements in performance were significantly reduced when TMS was delivered to the right SMG during the inter-trial retention periods. No deficits were induced when TMS was delivered concurrently with the onset of the target visual stimuli. For the random version of the task, neither delivery of TMS to the SMG during the inter-trial period nor during the presentation of the target visual stimuli produced any deficit in performance that was significantly different from the no-TMS or control conditions. These findings demonstrate that neural activity within the right SMG is causally linked to the ability to perform short latency predictive saccades resulting from sequence learning. We conclude that neural activity in rSMG constitutes an instruction set with spatial and temporal directives that are retained and subsequently released for predictive motor planning and responses. |
2012 |
Hazel I. Blythe; Feifei Liang; Chuanli Zang; Jingxin Wang; Guoli Yan; Xuejun Bai; Simon P. Liversedge Inserting spaces into Chinese text helps readers to learn new words: An eye movement study Journal Article In: Journal of Memory and Language, vol. 67, no. 2, pp. 241–254, 2012. @article{Blythe2012, We examined whether inserting spaces between words in Chinese text would help children learn to read new vocabulary. We recorded adults' and 7- to 10-year-old children's eye movements as they read new 2-character words, each embedded in four explanatory sentences (the learning session). Participants were divided into learning subgroups - half read word spaced sentences, and half read unspaced sentences. In the test session participants read the new words again, each in one new sentence; here, all participants read unspaced text. In the learning session, participants in the spaced group read the new words more quickly than participants in the unspaced group. Further, children in the spaced group maintained this benefit in the test session (unspaced text). In relation to three different models of Chinese lexical identification, we argue that the spacing manipulation allowed the children to form either stronger connections between the two characters' representations and the corresponding, novel word representation, or to form a more fully specified representation of the word itself. |
Stefan M. Wierda; Hedderik Rijn; Niels A. Taatgen; Sander Martens Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 22, pp. 8456–8460, 2012. @article{Wierda2012, The size of the human pupil increases as a function of mental effort. However, this response is slow, and therefore its use is thought to be limited to measurements of slow tasks or tasks in which meaningful events are temporally well separated. Here we show that high-temporal-resolution tracking of attention and cognitive processes can be obtained from the slow pupillary response. Using automated dilation deconvolution, we isolated and tracked the dynamics of attention in a fast-paced temporal attention task, allowing us to uncover the amount of mental activity that is critical for conscious perception of relevant stimuli. We thus found evidence for specific temporal expectancy effects in attention that have eluded detection using neuroimaging methods such as EEG. Combining this approach with other neuroimaging techniques can open many research opportunities to study the temporal dynamics of the mind's inner eye in great detail. |
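The Wierda et al. abstract above describes recovering fast attentional dynamics from the slow pupil signal by deconvolution. The sketch below illustrates the general idea under stated assumptions: it uses the Hoeks and Levelt pupil response function and estimates pulse strengths at fixed candidate times by least squares, on simulated data. It is a simplified illustration, not the authors' implementation.

```python
# Illustrative sketch of pupil dilation deconvolution (simplified, not Wierda
# et al.'s code). Assumes the Hoeks & Levelt pupil response function
# (n = 10.1, t_max = 930 ms); all data below are simulated.
import numpy as np

np.random.seed(0)
fs = 10.0                                   # samples per second
t = np.arange(0, 4, 1 / fs)                 # 4 s of pupil signal

# Pupil response function, peak-normalized.
n, t_max = 10.1, 0.930
h = (t ** n) * np.exp(-n * t / t_max)
h /= h.max()

# Simulate a pupil trace driven by attentional pulses at 0.5 s and 2.0 s.
pupil = 0.02 * np.random.randn(len(t))      # measurement noise
for onset in (0.5, 2.0):
    shift = int(onset * fs)
    pupil[shift:] += h[: len(t) - shift]

# Deconvolution: regress the trace onto time-shifted copies of the response
# function, one per candidate pulse time, and read off the pulse strengths.
pulse_times = np.arange(0, 2.5, 0.5)
A = np.zeros((len(t), len(pulse_times)))
for k, onset in enumerate(pulse_times):
    shift = int(onset * fs)
    A[shift:, k] = h[: len(t) - shift]
weights, *_ = np.linalg.lstsq(A, pupil, rcond=None)
print(dict(zip(pulse_times.tolist(), np.round(weights, 2))))
# Expected: strengths near 1.0 at 0.5 s and 2.0 s, near 0.0 elsewhere.
```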
Guoli Yan; Xuejun Bai; Chuanli Zang; Qian Bian; Lei Cui; Wei Qi; Keith Rayner; Simon P. Liversedge Using stroke removal to investigate Chinese character identification during reading: Evidence from eye movements Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 951–979, 2012. @article{Yan2012, We explored the effect of stroke removal from Chinese characters on eye movements during reading to examine the role of stroke encoding in character identification. Experimental sentences were comprised of characters with different proportions of strokes removed (15, 30, and 50%), and different types of strokes removed (beginning, ending, and strokes that ensured the configuration of the character was retained). Reading times, number of fixations and regression measures all showed that Chinese characters with 15% of strokes removed were as easy to read as Chinese characters without any strokes removed. However, when 30%, or more of a character's strokes were removed, reading characters with their configuration retained was easiest, characters with ending strokes removed were more difficult, whilst characters with beginning strokes removed were most difficult to read. The results strongly suggest that not all strokes within a character have equal status during character identification, and a flexible stroke encoding system must underlie successful character identification during Chinese reading. |
Ming Yan; Sarah Risse; Xiaolin Zhou; Reinhold Kliegl Preview fixation duration modulates identical and semantic preview benefit in Chinese reading Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 1093–1111, 2012. @article{Yan2012a, Semantic preview benefit from parafoveal words is critical for proposals of distributed lexical processing during reading. Semantic preview benefit has been demonstrated for Chinese reading with the boundary paradigm in which unrelated or semantically related previews of a target word "N" + 1 are replaced by the target word once the eyes cross an invisible boundary located after word "N" (Yan et al., 2009); for the target word in position "N" + 2, only identical compared to unrelated-word preview led to shorter fixation times on the target word (Yan et al., in press). A reanalysis of these data reveals that identical and semantic preview benefits depend on preview duration (i.e., the fixation duration on the preboundary word). Identical preview benefit from word "N" + 1 increased with preview duration. The identical preview benefit was also significant for "N" + 2, but did not significantly interact with preview duration. The previously reported semantic preview benefit from word "N" + 1 was mainly due to single- or first-fixation durations following short previews. We discuss implications for notions of serial attention shifts and parallel distributed processing of words during reading. |
Ming Yan; Wei Zhou; Hua Shu; Reinhold Kliegl Lexical and sublexical semantic preview benefits in Chinese reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 4, pp. 1069–1075, 2012. @article{Yan2012b, Semantic processing from parafoveal words is an elusive phenomenon in alphabetic languages, but it has been demonstrated only for a restricted set of noncompound Chinese characters. Using the gaze-contingent boundary paradigm, this experiment examined whether parafoveal lexical and sublexical semantic information was extracted from compound preview characters. Results generalized parafoveal semantic processing to this representative set of Chinese characters and extended the parafoveal processing to radical (sublexical) level semantic information extraction. Implications for notions of parafoveal information extraction during Chinese reading are discussed. |
Jinmian Yang; Keith Rayner; Nan Li; Suiping Wang Is preview benefit from word n + 2 a common effect in reading Chinese? Evidence from eye movements Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 1079–1091, 2012. @article{Yang2012, Although most studies of reading English (and other alphabetic languages) have indicated that readers do not obtain preview benefit from word n + 2, Yang, Wang, Xu, and Rayner (2009) reported evidence that Chinese readers obtain preview benefit from word n + 2. However, this effect may not be common in Chinese because the character prior to the target word in Yang et al.'s experiment was always a very high frequency function word. In the current experiment, we utilized a relatively low frequency word n + 1 to examine whether an n + 2 preview benefit effect would still exist and failed to find any preview benefit from word n + 2. These results are consistent with a recent study which indicated that foveal load modulates the perceptual span during Chinese reading (Yan, Kliegl, Shu, Pan, & Zhou, 2010). Implications of these results for models of eye movement control are discussed. |
Jinmian Yang; Adrian Staub; Nan Li; Suiping Wang; Keith Rayner Plausibility effects when reading one- and two-character words in Chinese: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 6, pp. 1801–1809, 2012. @article{Yang2012b, Eye movements of Chinese readers were monitored as they read sentences containing a critical character that was either a 1-character word or the initial character of a 2-character word. Due to manipulation of the verb prior to the target word, the 1-character target word (or the first character of the 2-character target word) was either plausible or implausible, as an independent word, at the point at which it appeared, whereas the 2-character word was always plausible. The eye movement data showed that the plausibility manipulation did not exert an influence on the reading of the 2-character word or its component characters. However, plausibility significantly influenced reading of the 1-character target word. These results suggest that processes of semantic integration in reading Chinese are performed at a word level, instead of a character level, and that word segmentation must take place very early in the course of processing. |
Jinmian Yang; Suiping Wang; Xiuhong Tong; Keith Rayner Semantic and plausibility effects on preview benefit during eye fixations in Chinese reading Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 1031–1052, 2012. @article{Yang2012c, The boundary paradigm (Rayner, 1975) was used to examine whether high level information affects preview benefit during Chinese reading. In two experiments, readers read sentences with a 1-character target word while their eye movements were monitored. In Experiment 1, the semantic relatedness between the target word and the preview word was manipulated so that there were semantically related and unrelated preview words, both of which were not plausible in the sentence context. No significant differences between these two preview conditions were found, indicating no effect of semantic preview. In Experiment 2, we further examined semantic preview effects with plausible preview words. There were four types of previews: identical, related & plausible, unrelated & plausible, and unrelated & implausible. The results revealed a significant effect of plausibility as single fixation and gaze duration on the target region were shorter in the two plausible conditions than in the implausible condition. Moreover, there was some evidence for a semantic preview benefit as single fixation duration on the target region was shorter in the related & plausible condition than the unrelated & plausible condition. Implications of these results for processing of high level information during Chinese reading are discussed. |
Shun-nan Yang; Yu-chi Tai; James E. Sheedy; Beth Kinoshita; Matthew Lampa; Jami R. Kern Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance Journal Article In: Ophthalmic and Physiological Optics, vol. 32, no. 5, pp. 412–420, 2012. @article{Yang2012a, PURPOSE: To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and vision performance. METHODS: Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. Blink rate in pictorial viewing and reading (measured with an eye tracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to each LCS. Repeated measures ANOVA and mixed model ANCOVA were conducted to evaluate effects of LCS on blink rate, symptom score, and discrimination accuracy. RESULTS: Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. CONCLUSIONS: LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Solutions with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared to the solution without them. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses. These effects are presumably due to improved tear film quality. |
Zhou Yang; Todd Jackson; Xiao Gao; Hong Chen Identifying selective visual attention biases related to fear of pain by tracking eye movements within a dot-probe paradigm Journal Article In: Pain, vol. 153, no. 8, pp. 1742–1748, 2012. @article{Yang2012d, This research examined selective biases in visual attention related to fear of pain by tracking eye movements (EM) toward pain-related stimuli among the pain-fearful. EM of 21 young adults scoring high on a fear of pain measure (H-FOP) and 20 lower-scoring (L-FOP) control participants were measured during a dot-probe task that featured sensory pain-neutral, health catastrophe-neutral and neutral-neutral word pairs. Analyses indicated that the H-FOP group was more likely to direct immediate visual attention toward sensory pain and health catastrophe words than was the L-FOP group. The H-FOP group also had comparatively shorter first fixation latencies toward sensory pain and health catastrophe words. Conversely, groups did not differ on EM indices of attentional maintenance (i.e., first fixation duration, gaze duration, and average fixation duration) or reaction times to dot probes. Finally, both groups showed a cycle of disengagement followed by re-engagement toward sensory pain words relative to other word types. In sum, this research is the first to reveal biases toward pain stimuli during very early stages of visual information processing among the highly pain-fearful and highlights the utility of EM tracking as a means to evaluate visual attention as a dynamic process in the context of FOP. |
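For readers unfamiliar with the eye-movement indices listed in the Yang et al. fear-of-pain abstract above, the sketch below computes two of them (probability that the first fixation lands on the pain-related word, and mean first-fixation latency to those words) from hypothetical trial-level data. The trial structure and field names are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch (hypothetical trial data, not the authors' analysis):
# two orienting indices commonly derived in eye-tracking dot-probe studies.
trials = [
    {"first_fix_on": "pain",    "first_fix_latency_ms": 180},
    {"first_fix_on": "neutral", "first_fix_latency_ms": 210},
    {"first_fix_on": "pain",    "first_fix_latency_ms": 165},
    {"first_fix_on": "pain",    "first_fix_latency_ms": 175},
]

pain_trials = [t for t in trials if t["first_fix_on"] == "pain"]
p_first_fix_pain = len(pain_trials) / len(trials)
mean_latency_pain = sum(t["first_fix_latency_ms"] for t in pain_trials) / len(pain_trials)

print(f"P(first fixation on pain word) = {p_first_fix_pain:.2f}")                  # 0.75
print(f"Mean first-fixation latency to pain words = {mean_latency_pain:.0f} ms")   # 173 ms
```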
Miao-Hsuan Yen; Ralph Radach; Ovid J. L. Tzeng; Jie-Li Tsai Usage of statistical cues for word boundary in reading Chinese sentences Journal Article In: Reading and Writing, vol. 25, no. 5, pp. 1007–1029, 2012. @article{Yen2012, The present study examined the use of statistical cues for word boundaries during Chinese reading. Participants were instructed to read sentences for comprehension with their eye movements being recorded. A two-character target word was embedded in each sentence. The contrast between the probabilities of the ending character (C2) of the target word (C12) being used as word beginning and ending in all words containing it was manipulated. In addition, by using the boundary paradigm, parafoveal overlapping ambiguity in the string C123 was manipulated with three types of preview of the character C3, which was a single-character word in the identical condition. During preview, the combination of C23′ was a legal word in the ambiguous condition and was not a word in the control condition. Significant probability and preview effects were observed. In the low-probability condition, inconsistency in the frequent within-word position (word beginning) and the present position (word ending) lengthened gaze durations and increased refixation rate on the target word. Although benefits from the identical previews were apparent, effects of overlapping ambiguity were negligible. The results suggest that the probability of within-word positions had an influence during character-to-word assignment, which was mainly verified during foveal processing. Thus, the overlapping ambiguity between parafoveal words did not interfere with reading. Further investigation is necessary to examine whether current computational models of eye movement control should incorporate statistical cues for word boundaries together with other linguistic factors in their word processing system to account for Chinese reading. |
Felicity D. A. Wolohan; Trevor J. Crawford The anti-orienting phenomenon revisited: Effects of gaze cues on antisaccade performance Journal Article In: Experimental Brain Research, vol. 221, no. 4, pp. 385–392, 2012. @article{Wolohan2012, When the eye gaze of a face is congruent with direction of an upcoming target, saccadic eye movements of the observer towards that target are generated more quickly, in comparison to eye gaze incongruent with the direction of the target. This work examined the conflict in an antisaccade task, when eye gaze points towards the target, but the saccadic eye movement should be triggered in the opposite direction. In a gaze cueing paradigm a central face provided an attentional gaze cue towards the target or away from the target. Participants (N = 38) generated pro- and anti-saccades to peripheral targets that were congruent or incongruent with the previous gaze cue. Paradoxically, facilitatory effects of a gaze cue towards the target were observed for both the pro- and anti-saccade tasks. The results are consistent with the idea that eye gaze cues are processed in the task set that is compatible with the saccade programme. Thus, in an antisaccade paradigm participants may anti-orient with respect to the gaze cue resulting in faster saccades on trials when the gaze cue is towards the target. The results resemble a previous observation by Fischer and Weber (1996) using low level peripheral cues. The current study extends this finding to include central socially communicative cues. |
Luke Woloszyn; David L. Sheinberg Effects of long-term visual experience on responses of distinct classes of single units in inferior temporal cortex Journal Article In: Neuron, vol. 74, no. 1, pp. 193–205, 2012. @article{Woloszyn2012, Primates can learn to recognize a virtually limitless number of visual objects. A candidate neural substrate for this adult plasticity is the inferior temporal cortex (ITC). Using a large stimulus set, we explored the impact that long-term experience has on the response properties of two classes of neurons in ITC: broad-spiking (putative excitatory) cells and narrow-spiking (putative inhibitory) cells. We found that experience increased maximum responses of putative excitatory neurons but had the opposite effect on maximum responses of putative inhibitory neurons, an observation that helps to reconcile contradictory reports regarding the presence and direction of this effect. In addition, we found that experience reduced the average stimulus-evoked response in both cell classes, but this decrease was much more pronounced in putative inhibitory units. This latter finding supports a potentially critical role of inhibitory neurons in detecting and initiating the cascade of events underlying adult neural plasticity in ITC. |
Daw-An Wu; Shinsuke Shimojo; Stephanie W. Wang; Colin F. Camerer Shared visual attention reduces hindsight bias Journal Article In: Psychological Science, vol. 23, no. 12, pp. 1524–1533, 2012. @article{Wu2012, Hindsight bias is the tendency to retrospectively think of outcomes as being more foreseeable than they actually were. It is a robust judgment bias and is difficult to correct (or "debias"). In the experiments reported here, we used a visual paradigm in which performers decided whether blurred photos contained humans. Evaluators, who saw the photos unblurred and thus knew whether a human was present, estimated the proportion of participants who guessed whether a human was present. The evaluators exhibited visual hindsight bias in a way that matched earlier data from judgments of historical events surprisingly closely. Using eye tracking, we showed that a higher correlation between the gaze patterns of performers and evaluators (shared attention) is associated with lower hindsight bias. This association was validated by a causal method for debiasing: Showing the gaze patterns of the performers to the evaluators as they viewed the stimuli reduced the extent of hindsight bias. |
Lingdan Wu; Jie Pu; John J. B. Allen; Paul Pauli Recognition of facial expressions in individuals with elevated levels of depressive symptoms: An eye-movement study Journal Article In: Depression Research and Treatment, pp. 7, 2012. @article{Wu2012a, Previous studies have consistently reported abnormal recognition of facial expressions in depression. However, it remains unclear whether this abnormality reflects an enhanced or an impaired ability to recognize facial expressions, and which underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls in facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftward attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition. |
Brad Wyble; Mary C. Potter; Marcelo Mattar RSVP in orbit: Identification of single and dual targets in motion Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 3, pp. 553–562, 2012. @article{Wyble2012, |
Yangqing Xu; Steven L. Franconeri The head of the table: Marking the "front" of an object is tightly linked with selection Journal Article In: Journal of Neuroscience, vol. 32, no. 4, pp. 1408–1412, 2012. @article{Xu2012, Objects in the world do not have a surface that can be objectively labeled the "front." We impose this designation on one surface of an object according to several cues, including which surface is associated with the most task-relevant information or the direction of motion of an object. However, when these cues are competing, weak, or absent, we can also flexibly assign one surface as the front. One possibility is that this assignment is guided by the location of the "spotlight" of selection, where the selected region becomes the front. Here we used an electrophysiological correlate to show a direct temporal link between object structure assignments and the spatial locus of selection. We found that when human participants viewed a shape whose front and back surfaces were ambiguous, seeing a given surface as front was associated with selectively attending to that location. In Experiment 1, this pattern occurred during directed rapid (every 1 s) switches in structural percepts. In Experiment 2, this pattern occurred during spontaneous reversals, from 900 ms before to 600 ms after the reported percept. These results suggest that the distribution of selective attention might guide the organization of object structure. |
Kenji Yokoi; Katsumi Watanabe; Shinya Saida Rapid and implicit effects of color category on visual search Journal Article In: Optical Review, vol. 19, no. 4, pp. 276–281, 2012. @article{Yokoi2012, Many studies suggest that color category influences visual perception. It is also well known that oculomotor control and visual attention are closely linked. In order to clarify the temporal characteristics of color categorization, we investigated eye movements during color visual search. Eight color disks were presented briefly for 20–320 ms, and the subject was instructed to gaze at a target shown prior to the trial. We found that the color category of the target modulated eye movements significantly when the stimulus was displayed for more than 40 ms, and that categorization could be completed within 80 ms. With the 20 ms presentation, search performance was at chance level; however, the first saccadic latency suggested that the color category had an effect on visual attention. These results suggest that color categorization affects the guidance of visual attention rapidly and implicitly. |
Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita Attentional shifts by gaze direction in voluntary orienting: evidence from a microsaccade study Journal Article In: Experimental Brain Research, vol. 223, no. 2, pp. 291–300, 2012. @article{Yokoyama2012, Shifts in spatial attention can be induced by the gaze direction of another. However, it is unclear whether gaze direction influences the allocation of attention by reflexive or voluntary orienting. The present study was designed to examine which type of attentional orienting is elicited by gaze direction. We conducted two experiments to answer this question. In Experiment 1, we used a modified Posner paradigm with gaze cues and measured microsaccades to index the allocation of attention. We found that microsaccade direction followed cue direction between 200 and 400 ms after gaze cues were presented. This is consistent with the latencies observed in other microsaccade studies in which voluntary orienting is manipulated, suggesting that gaze direction elicits voluntary orienting. However, Experiment 1 did not separate voluntary and reflexive orienting directionally, so in Experiment 2, we used an anticue task in which cue direction (direction to allocate attention) was the opposite of gaze direction (direction of gaze in depicted face). The results in Experiment 2 were consistent with those from Experiment 1. Microsaccade direction followed the cue direction, not gaze direction. Taken together, these results indicate that the shift in spatial attention elicited by gaze direction is voluntary orienting. |
Kyoko Yoshida; Nobuhito Saito; Atsushi Iriki; Masaki Isoda Social error monitoring in macaque frontal cortex Journal Article In: Nature Neuroscience, vol. 15, no. 9, pp. 1307–1312, 2012. @article{Yoshida2012, Although much learning occurs through direct experience of errors, humans and other animals can learn from the errors of other individuals. The medial frontal cortex (MFC) processes self-generated errors, but the neuronal architecture and mechanisms underlying the monitoring of others' errors are poorly understood. Exploring such mechanisms is important, as they underlie observational learning and allow adaptive behavior in uncertain social environments. Using two paired monkeys that monitored each other's action for their own action selection, we identified a group of neurons in the MFC that exhibited a substantial activity increase that was associated with another's errors. Nearly half of these neurons showed activity changes consistent with general reward-omission signals, whereas the remaining neurons specifically responded to another's erroneous actions. These findings indicate that the MFC contains a dedicated circuit for monitoring others' mistakes during social interactions. |
Gregory J. Zelinsky TAM: Explaining off-object fixations and central fixation tendencies as effects of population averaging during search Journal Article In: Visual Cognition, vol. 20, no. 4-5, pp. 515–545, 2012. @article{Zelinsky2012a, Understanding how patterns are selected for both recognition and action, in the form of an eye movement, is essential to understanding the mechanisms of visual search. It is argued that selecting a pattern for fixation is time consuming, requiring the pruning of a population of possible saccade vectors to isolate the specific movement to the potential target. To support this position, two experiments are reported showing evidence for off-object fixations, where fixations land between objects rather than directly on objects, and central fixations, where initial saccades land near the center of scenes. Both behaviors were modeled successfully using TAM (Target Acquisition Model; Zelinsky, 2008). TAM interprets these behaviors as expressions of population averaging occurring at different times during saccade target selection. A large population early during search results in the averaging of the entire scene and a central fixation; a smaller population later during search results in averaging between groups of objects and off-object fixations. |
Gregory J. Zelinsky; Yifan Peng; Alexander C. Berg; Dimitris Samaras Modeling guidance and recognition in categorical search: Bridging human and computer object detection Journal Article In: Journal of Vision, vol. 12, no. 9, pp. 957–957, 2012. @article{Zelinsky2012, Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. |
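The guidance-as-recognition-on-blurred-input idea in the entry above can be illustrated with a generic toy model. The sketch below (Python, scikit-learn) trains a linear SVM on synthetic "unblurred" feature vectors and applies the same classifier to artificially degraded versions of those features; the synthetic features, the blur function, and all names are assumptions for illustration only, not the detectors, HMAX features, or color histograms used by Zelinsky and colleagues.

```python
# Illustrative sketch only (assumptions throughout): a generic linear SVM stands in
# for the paper's detectors; synthetic Gaussian features stand in for HMAX / color
# histogram features; "blur" is simulated by feature smoothing plus noise.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_feat = 200, 64

# Synthetic "unblurred" features for targets (class 1) and distractors (class 0).
targets = rng.normal(1.0, 1.0, (n_per_class, n_feat))
distractors = rng.normal(0.0, 1.0, (n_per_class, n_feat))
X = np.vstack([targets, distractors])
y = np.r_[np.ones(n_per_class), np.zeros(n_per_class)]

# "Recognition": a classifier trained and tested on unblurred (foveated) features.
recognizer = SVC(kernel="linear", probability=True).fit(X, y)

def blur(features, sigma=2.0, noise=0.5):
    """Simulate degraded peripheral input by smoothing features and adding noise."""
    return gaussian_filter1d(features, sigma=sigma, axis=1) + rng.normal(0.0, noise, features.shape)

# "Guidance": apply the same classifier to blurred features of a 4-object display
# (one target, three distractors) and fixate the object with the highest target probability.
display = np.vstack([targets[:1], distractors[:3]])
scores = recognizer.predict_proba(blur(display))[:, 1]
print("target probabilities per object:", np.round(scores, 2))
print("first-fixated object index (0 = target):", int(np.argmax(scores)))
```

In this toy, "guidance" and "recognition" share one feature space and one classifier and differ only in the quality of the input, which is the conclusion the paper argues for.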
Yang Zhou; Yining Liu; Wangzikang Zhang; Mingsha Zhang Asymmetric influence of egocentric representation onto allocentric perception Journal Article In: Journal of Neuroscience, vol. 32, no. 24, pp. 8354–8360, 2012. @article{Zhou2012, Objects in the visual world can be represented in both egocentric and allocentric coordinates. Previous studies have found that allocentric representation can affect the accuracy of spatial judgment relative to an egocentric frame, but not vice versa. Here we asked whether egocentric representation influenced the processing speed of allocentric perception. We measured the manual reaction time of human subjects in a position discrimination task in which the behavioral response relied purely on the target's allocentric location, independent of its egocentric position. We used two conditions of stimulus location: a compatible condition (allocentric left and egocentric left, or allocentric right and egocentric right) and an incompatible condition (allocentric left and egocentric right, or allocentric right and egocentric left). We found that egocentric representation markedly influenced allocentric perception in three ways. First, at a given egocentric location, allocentric perception was significantly faster in the compatible condition than in the incompatible condition. Second, as the target became more eccentric in the visual field, the speed of allocentric perception gradually slowed in the incompatible condition but remained unchanged in the compatible condition. Third, egocentric-allocentric incompatibility slowed allocentric perception more on the left egocentric side than on the right egocentric side. These results cannot be explained by interhemispheric visuomotor transformation or stimulus-response compatibility theory. Our findings indicate that each hemisphere preferentially processes and integrates contralateral egocentric and allocentric spatial information, and that the right hemisphere receives more ipsilateral egocentric inputs than the left hemisphere does. |
Eckart Zimmermann; M. Concetta Morrone; David C. Burr Visual motion distorts visual and motor space Journal Article In: Journal of Vision, vol. 12, no. 2, pp. 10–10, 2012. @article{Zimmermann2012, Much evidence suggests that visual motion can cause severe distortions in the perception of spatial position. In this study, we show that visual motion also distorts saccadic eye movements. Landing positions of saccades performed to objects presented in the vicinity of visual motion were biased in the direction of motion. The targeting errors for both saccades and perceptual reports were maximum during motion onset and were of very similar magnitude under the two conditions. These results suggest that visual motion affects a representation of spatial position, or spatial map, in a similar fashion for visuomotor action as for perception. |
Heng Zou; Hermann J. Muller; Zhuanghua Shi Non-spatial sounds regulate eye movements and enhance visual search Journal Article In: Journal of Vision, vol. 12, no. 5, pp. 2–2, 2012. @article{Zou2012, Spatially uninformative sounds can enhance visual search when the sounds are synchronized with color changes of the visual target, a phenomenon referred to as "pip-and-pop" effect (van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008). The present study investigated the relationship of this effect to changes in oculomotor scanning behavior induced by the sounds. The results revealed sound events to increase fixation durations upon their occurrence and to decrease the mean number of saccades. More specifically, spatially uninformative sounds facilitated the orientation of ocular scanning away from already scanned display regions not containing a target (Experiment 1) and enhanced search performance even on target-absent trials (Experiment 2). Facilitation was also observed when the sounds were presented 100 ms prior to the target or at random (Experiment 3). These findings suggest that non-spatial sounds cause a general freezing effect on oculomotor scanning behavior, an effect which in turn benefits visual search performance by temporally and spatially extended information sampling. |
Wietske Zuiderbaan; Ben M. Harvey; Serge O. Dumoulin Modeling center – surround configurations in population receptive fields using fMRI Journal Article In: Journal of Vision, vol. 12, no. 3, pp. 1–15, 2012. @article{Zuiderbaan2012, Antagonistic center–surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional Magnetic Resonance Imaging (fMRI) signal amplitudes. Decreased fMRI amplitudes below baseline—0% contrast—are often referred to as “negative” responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center–surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance explained is found for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates for the surround size of the pRF show an increase with eccentricity and over visual areas V1/2/3. For the suppression index, which is based on the ratio between the volumes of both Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI techniques, this method gives the possibility to examine assumptions about center–surround receptive fields in human subjects. |
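For readers unfamiliar with the pRF framework, the model class described in the entry above can be summarized as follows. This is a minimal sketch of the standard difference-of-Gaussians form; the symbols and the particular suppression-index expression are assumptions used for illustration, not the paper's exact fitting equations.

\[
g_{\mathrm{DoG}}(x,y) \;=\; \exp\!\left(-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma_1^2}\right) \;-\; \beta\,\exp\!\left(-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma_2^2}\right), \qquad \sigma_2 > \sigma_1,\; 0 < \beta < 1,
\]

\[
p(t) \;=\; \left[\,\iint g_{\mathrm{DoG}}(x,y)\,s(x,y,t)\,dx\,dy\,\right] \ast h(t),
\]

where (x_0, y_0) is the pRF center, \sigma_1 and \sigma_2 are the widths of the excitatory center and suppressive surround, \beta is their amplitude ratio, s(x,y,t) is the binarized stimulus aperture, and h(t) is a hemodynamic response function. A suppression index based on the ratio of the two Gaussian volumes would then take the form \mathrm{SI} = \beta\,\sigma_2^2/\sigma_1^2, since the volume of a two-dimensional Gaussian scales with its amplitude times its squared width.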
Jan Zwickel; Mathias Hegele; Marc Grosjean Ocular tracking of biological and nonbiological motion: The effect of instructed agency Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 1, pp. 52–57, 2012. @article{Zwickel2012, Recent findings suggest that visuomotor performance is modulated by people's beliefs about the agency (e.g., animate vs. inanimate) behind the events they perceive. This study investigated the effect of instructed agency on ocular tracking of point-light motions with biological and nonbiological velocity profiles. The motions followed either a relatively simple (ellipse) or a more complex (scribble) trajectory, and agency was manipulated by informing the participants that the motions they saw were either human or computer generated. In line with previous findings, tracking performance was better for biological than for nonbiological motions, and this effect was particularly pronounced for the simpler (elliptical) motions. The biological advantage was also larger for the human than for the computer instruction condition, but only for a measure that captured the predictive component of smooth pursuit. These results suggest that ocular tracking is influenced by the internal forward model people choose to adopt. |
Ariel Zylberberg; Pablo Barttfeld; Mariano Sigman The construction of confidence in a perceptual decision Journal Article In: Frontiers in Integrative Neuroscience, vol. 6, pp. 79, 2012. @article{Zylberberg2012, Decision-making involves the selection of one out of many possible courses of action. A decision may bear on other decisions, as when humans seek a second medical opinion before undergoing a risky surgical intervention. These "meta-decisions" are mediated by confidence judgments-the degree to which decision-makers consider that a choice is likely to be correct. We studied how subjective confidence is constructed from noisy sensory evidence. The psychophysical kernels used to convert sensory information into choice and confidence decisions were precisely reconstructed measuring the impact of small fluctuations in sensory input. This is shown in two independent experiments in which human participants made a decision about the direction of motion of a set of randomly moving dots, or compared the brightness of a group of fluctuating bars, followed by a confidence report. The results of both experiments converged to show that: (1) confidence was influenced by evidence during a short window of time at the initial moments of the decision, and (2) confidence was influenced by evidence for the selected choice but was virtually blind to evidence for the non-selected choice. Our findings challenge classical models of subjective confidence-which posit that the difference of evidence in favor of each choice is the seed of the confidence signal. |
Ariel Zylberberg; Manuel Oliva; Mariano Sigman Pupil dilation: A fingerprint of temporal selection during the "Attentional Blink" Journal Article In: Frontiers in Psychology, vol. 3, pp. 316, 2012. @article{Zylberberg2012a, Pupil dilation indexes cognitive events of behavioral relevance, like the storage of information to memory and the deployment of attention. Yet, given the slow temporal response of the pupil dilation, it is not known from previous studies whether the pupil can index cognitive events in the short time scale of ∼100 ms. Here we measured the size of the pupil in the Attentional Blink (AB) experiment, a classic demonstration of attentional limitations in processing rapidly presented stimuli. In the AB, two targets embedded in a sequence have to be reported and the second stimulus is often missed if presented between 200 and 500 ms after the first. We show that pupil dilation can be used as a marker of cognitive processing in AB, revealing both the timing and amount of cognitive processing. Specifically, we found that in the time range where the AB is known to occur: (i) the pupil dilation was delayed, mimicking the pattern of response times in the Psychological Refractory Period (PRP) paradigm, (ii) the amplitude of the pupil was reduced relative to that of larger lags, even for correctly identified targets, and (iii) the amplitude of the pupil was smaller for missed than for correctly reported targets. These results support two-stage theories of the Attentional Blink where a second processing stage is delayed inside the interference regime, and indicate that the pupil dilation can be used as a marker of cognitive processing in the time scale of ∼100 ms. Furthermore, given the known relation between the pupil dilation and the activity of the locus coeruleus, our results also support theories that link the serial stage to the action of a specific neuromodulator, norepinephrine. |
Hang Zhang; Camille Morvan; Louis Alexandre Etezad-Heydari; Laurence T. Maloney Very slow search and reach: Failure to maximize expected gain in an eye-hand coordination task Journal Article In: PLoS Computational Biology, vol. 8, no. 10, pp. e1002718, 2012. @article{Zhang2012a, We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. |
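The gain structure described in this entry (a fixed reward reduced in proportion to the combined search-and-reach time) implies an objective of roughly the following form; the notation below is an illustrative sketch of the decision-theoretic setup, not the authors' derivation.

\[
\mathbb{E}[G \mid \pi] \;=\; R \cdot P(\text{touch target} \mid \pi) \;-\; c \cdot \mathbb{E}\!\left[\,T_{\text{search}} + T_{\text{reach}} \mid \pi\,\right],
\]

where \pi denotes a coordinated eye-hand strategy (the sequence of fixations plus the timing and path of the reach), R is the reward for touching the target, and c is the cost per unit time. The optimal strategy maximizes this expected gain, and the paper's central finding is that most observers fall short of it.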
Hao Zhang; Hong-Mei Yan; Keith M. Kendrick; Chao-Yi Li Both lexical and non-lexical characters are processed during saccadic eye movements Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e46383, 2012. @article{Zhang2012, On average our eyes make 3-5 saccadic movements per second when we read, although their neural mechanism is still unclear. It is generally thought that saccades help redirect the retinal fovea to specific characters and words but that actual discrimination of information only occurs during periods of fixation. Indeed, it has been proposed that there is active and selective suppression of information processing during saccades to avoid experience of blurring due to the high-speed movement. Here, using a paradigm where a string of either lexical (Chinese) or non-lexical (alphabetic) characters are triggered by saccadic eye movements, we show that subjects can discriminate both while making saccadic eye movement. Moreover, discrimination accuracy is significantly better for characters scanned during the saccadic movement to a fixation point than those not scanned beyond it. Our results showed that character information can be processed during the saccade, therefore saccades during reading not only function to redirect the fovea to fixate the next character or word but allow pre-processing of information from the ones adjacent to the fixation locations to help target the next most salient one. In this way saccades can not only promote continuity in reading words but also actively facilitate reading comprehension. |
Jun-Yun Zhang; Gong-Liang Zhang; Lei Liu; Cong Yu Whole report uncovers correctly identified but incorrectly placed target information under visual crowding Journal Article In: Journal of Vision, vol. 12, no. 7, pp. 1–11, 2012. @article{Zhang2012b, Multiletter identification studies often find correctly identified letters being reported in wrong positions. However, how position uncertainty impacts crowding in peripheral vision is not fully understood. The observation of a flanker being reported as the central target cannot be taken as unequivocal evidence for position misperception because the observers could be biased to report a more identifiable flanker when failing to identify the central target. In addition, it has never been reported whether a correctly identified central target can be perceived at a flanker position under crowding. Empirical investigation into this possibility holds the key to demonstrating letter-level position uncertainty in crowding, because the position errors of the least identifiable central target cannot be attributed to response bias. We asked normally-sighted observers to report either the central target of a trigram (partial report) or all three characters (whole report). The results showed that, for radially arranged trigrams, the rate of reporting the central target regardless of the reported position in the whole report was significantly higher than the partial report rate, and the extra target reports mostly ended up in flanker positions. Error analysis indicated that target-flanker position swapping and misalignment (lateral shift of the target and one flanker) underlay this target misplacement. Our results thus establish target misplacement as a source of crowding errors and ascertain the role of letter-level position uncertainty in crowding. |
Gu Zhao; Qiang Liu; Jun Jiao; Peiling Zhou; Hong Li; Hong-jin Sun Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 11–11, 2012. @article{Zhao2012, Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the use of the RT × Set Size function. There are divergent views on whether the contextual cueing effect is driven by attentional guidance, facilitation of initial perceptual processing, or response selection. To explore this question, we used eye movement recording in this study, which offers information about the substages of the search task. The results suggest that the contextual cueing effect is contributed mainly by attentional guidance, and that facilitation of response selection also plays a role. |
Jifan Zhou; Chia-Lin Lee; Su-Ling Yeh Semantic priming from crowded words Journal Article In: Psychological Science, vol. 23, no. 6, pp. 608–616, 2012. @article{Zhou2012a, Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment. |
Peng Zhou; Stephen Crain; Likan Zhan Sometimes children are as good as adults – The pragmatic use of prosody in children's on-line sentence processing Journal Article In: Journal of Memory and Language, vol. 67, no. 8, pp. 149–164, 2012. @article{Zhou2012b, This study examined 4-year-old Mandarin-speaking children's sensitivity to prosodic cues in resolving speech act ambiguities, using eye-movement recordings. Most previous on-line studies have focused on children's use of prosody in resolving structural ambiguities. Although children have been found to be sensitive to prosodic information, they use such information less effectively than adults in on-line sentence processing. The present study takes advantage of special properties of Mandarin Chinese to investigate the role of prosody in children's on-line processing of ambiguities in which prosody serves to signal the illocutionary meaning of an utterance (i.e., whether the speaker is asking a question or making a statement). We found that the effect of prosody in this case was as robust in children as it was in adults. This suggests that children are as sensitive as adults in using prosody in on-line sentence processing, when prosody is used to resolve a pragmatic ambiguity. |
Peng Zhou; Yi Su; Stephen Crain; Liqun Gao; Likan Zhan Children's use of phonological information in ambiguity resolution: A view from Mandarin Chinese Journal Article In: Journal of Child Language, vol. 39, no. 4, pp. 687–730, 2012. @article{Zhou2012c, How do children develop the mapping between prosody and other levels of linguistic knowledge? This question has received considerable attention in child language research. In the present study two experiments were conducted to investigate four- to five-year-old Mandarin-speaking children's sensitivity to prosody in ambiguity resolution. Experiment 1 used eye-tracking to assess children's use of stress in resolving structural ambiguities. Experiment 2 took advantage of special properties of Mandarin to investigate whether children can use intonational cues to resolve ambiguities involving speech acts. The results of our experiments show that children's use of prosodic information in ambiguity resolution varies depending on the type of ambiguity involved. Children can use prosodic information more effectively to resolve speech act ambiguities than to resolve structural ambiguities. This finding suggests that the mapping between prosody and semantics/pragmatics in young children is better established than the mapping between prosody and syntax. |
Daniel P. Blakely; Timothy J. Wright; Vincent M. Dehili; Walter R. Boot; James R. Brockmole Characterizing the time course and nature of attentional disengagement effects Journal Article In: Vision Research, vol. 56, pp. 38–48, 2012. @article{Blakely2012, Visual features of fixated but irrelevant items contribute to both how long overt attention dwells at a location and to decisions regarding the location of subsequent attention shifts (Boot & Brockmole, 2010; Brockmole & Boot, 2009). Fixated but irrelevant search items that share the color of the search target delay the deployment of attention. Furthermore, eye movements are biased to distractors that share the color of the currently fixated item. We present a series of experiments that examined these effects in depth. Experiment 1 explored the time course of disengagement effects. Experiments 2 and 3 explored the generalizability of disengagement effects by testing whether they could be observed when participants searched for targets defined by form instead of color. Finally, Experiment 4 validated the disengagement paradigm as a measure of disengagement and ruled out alternative explanations for slowed saccadic reaction times. Results confirm and extend our understanding of the influence of features within the focus of attention on when and where attention will shift next. |
Richard S. Bogartz; Adrian Staub Gaze step distributions reflect fixations and saccades: A comment on Stephen and Mirman (2010) Journal Article In: Cognition, vol. 123, no. 2, pp. 325–334, 2012. @article{Bogartz2012, In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye movement control in these tasks. The present comment argues that the gaze step distribution is predictable based on the fact that the eyes alternate between a fixation state in which gaze is steady and a saccade state in which gaze position changes rapidly. By fitting a simple mixture model to Stephen and Mirman's gaze step data we reveal a fixation distribution and a saccade distribution. This mixture model captures the shape of the gaze step distribution in detail, unlike the lognormal model, and provides a better quantitative fit to the data. We conclude that the gaze step distribution does not directly suggest processing interaction, and we emphasize some important limits on the utility of fitting theoretical distributions to data. |
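The two-state account in this comment lends itself to a simple mixture formulation; the sketch below gives only the generic form, since the specific component distributions are not stated in the abstract and are left here as placeholders.

\[
f(d) \;=\; \pi\, f_{\text{fix}}(d \mid \theta_{\text{fix}}) \;+\; (1-\pi)\, f_{\text{sac}}(d \mid \theta_{\text{sac}}),
\]

where d is the gaze-step size in pixels, \pi is the proportion of eyetracker samples falling in the fixation state (small steps dominated by drift and measurement noise), and 1-\pi is the proportion falling in the saccade state (large, rapid position changes). The parameters can be estimated by maximum likelihood (for example with EM), and the fitted mixture can then be compared with a single lognormal distribution using likelihood-based criteria, which is the kind of quantitative comparison the comment reports.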
Sabine Born; Ulrich Ansorge; Dirk Kerzel Feature-based effects in the coupling between attention and saccades Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 1–17, 2012. @article{Born2012, Previous research has demonstrated that prior to saccade execution visual attention is imperatively shifted towards the saccade target (e.g., Deubel & Schneider, 1996; Kowler, Anderson, Dosher, & Blaser, 1995). Typically, observers had to make a saccade according to an arrow cue and simultaneously perform a perceptual discrimination task either at the saccade endpoint or elsewhere on the screen. Discrimination performance was poor if the location of the saccade target (ST) and the discrimination target (DT) did not coincide. However, those experiments only investigated shifts of spatial attention. In the current experiments, we examined how feature-based attention is deployed before a saccade. In Experiment 1, we randomly varied the colors of the ST and DT. Results showed that discrimination performance was better when the DT was shown in the same color as the ST. This color congruency effect was slightly larger and more reliable when ST color was relevant and constant across trials (Experiment 2). We conclude that selection of a colored ST can induce display-wide facilitative processing of stimuli sharing this color. Results are discussed in terms of saccade programming and saccade selection, color priming in visual search, color cuing, and color-based top-down contingent attentional capture. We also discuss basic mechanisms of spatial- and feature-based attention and predictive remapping of visual information across saccades. |