EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2012 |
Barry Dauphin; Harold H. Greene Here's looking at you: Eye movement exploration of Rorschach images Journal Article In: Rorschachiana, vol. 33, no. 1, pp. 3–22, 2012. @article{Dauphin2012, This study represents the beginning of a systematic effort to utilize eye-movement responses in order to better understand individuals' processing strategies during the Rorschach Inkblot Method (RIM). Eye movements reflect moment-by-moment spatial and temporal processing of visual information and represent a useful approach for studying the RIM with potential clinical implications. Thirteen participants responded to the Rorschach while eye movements were being monitored. Several eye-movement indices were studied which reflect different aspects of information processing. Differences among the Rorschach cards were found for several eye-movement indices. For example, fixation durations were longer during a second viewing of the cards than during the first. This is consonant with an attempt to acquire conceptually difficult information, as participants were reinterpreting the cards. Results are discussed in terms of visual information processing strategies during the RIM and the potential usefulness of eye movements as a response measure to the RIM. |
Marco Davare; A. Zénon; Gilles Pourtois; Michel Desmurget; Etienne Olivier Role of the medial part of the intraparietal sulcus in implementing movement direction Journal Article In: Cerebral Cortex, vol. 22, no. 6, pp. 1382–1394, 2012. @article{Davare2012, The contribution of the posterior parietal cortex (PPC) to visually guided movements has been originally inferred from observations made in patients suffering from optic ataxia. Subsequent electrophysiological studies in monkeys and functional imaging data in humans have corroborated the key role played by the PPC in sensorimotor transformations underlying goal-directed movements, although the exact contribution of this structure remains debated. Here, we used transcranial magnetic stimulation (TMS) to interfere transiently with the function of the left or right medial part of the intraparietal sulcus (mIPS) in healthy volunteers performing visually guided movements with the right hand. We found that a "virtual lesion" of either mIPS increased the scattering in initial movement direction (DIR), leading to longer trajectory and prolonged movement time, but only when TMS was delivered 100-160 ms before movement onset and for movements directed toward contralateral targets. Control experiments showed that deficits in DIR consequent to mIPS virtual lesions resulted from an inappropriate implementation of the motor command underlying the forthcoming movement and not from an inaccurate computation of the target localization. The present study indicates that mIPS plays a causal role in implementing specifically the direction vector of visually guided movements toward objects situated in the contralateral hemifield. |
Ivar A. H. Clemens; Luc P. J. Selen; Mathieu Koppen; W. Pieter Medendorp Visual stability across combined eye and body motion Journal Article In: Journal of Vision, vol. 12, no. 12, pp. 1–11, 2012. @article{Clemens2012, In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both. |
Charles Clifton Jr.; Lyn Frazier Interpreting conjoined noun phrases and conjoined clauses: Collective versus distributive preferences Journal Article In: Quarterly Journal of Experimental Psychology, vol. 65, no. 9, pp. 1760–1776, 2012. @article{Clifton2012, Two experiments are reported that show that introducing event participants in a conjoined noun phrase (NP) favours a single event (collective) interpretation, while introducing them in separate clauses favours a separate events (distributive) interpretation. In Experiment 1, acceptability judgements were speeded when the bias of a predicate toward separate events versus a single event matched the presumed bias of how the subjects' referents were introduced (as conjoined noun phrases or in conjoined clauses). In Experiment 2, reading of a phrase containing an anaphor following conjoined noun phrases was facilitated when the anaphor was they, relative to when it was neither/each of them; the opposite pattern was found when the anaphor followed conjoined clauses. We argue that comprehension was facilitated when the form of an anaphor was appropriate for how its antecedents were introduced. These results address the very general problem of how we individuate entities and events when presented with a complex situation and show that different linguistic forms can guide how we construe a situation. The results also indicate that there is no general penalty for introducing the entities or events separately—in distinct clauses as “split” antecedents. |
Sébastien Coppe; Jean-Jacques Orban de Xivry; Demet Yuksel; Adrian Ivanoiu; Philippe Lefevre Dramatic impairment of prediction due to frontal lobe degeneration Journal Article In: Journal of Neurophysiology, vol. 108, no. 11, pp. 2957–2966, 2012. @article{Coppe2012, Prediction is essential for motor function in everyday life. For instance, predictive mechanisms improve the perception of a moving target by increasing eye speed anticipatively, thus reducing motion blur on the retina. Subregions of the frontal lobes play a key role in eye movements in general and in smooth pursuit in particular, but their precise function is not firmly established. Here, the role of frontal lobes in the timing of predictive action is demonstrated by studying predictive smooth pursuit during transient blanking of a moving target in mild frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD) patients. While control subjects and AD patients predictively reaccelerated their eyes before the predicted time of target reappearance, FTLD patients did not. The difference was so dramatic (classification accuracy >90%) that it could even lead to the definition of a new biomarker. In contrast, anticipatory eye movements triggered by the disappearance of the fixation point were still present before target motion onset in FTLD patients and visually guided pursuit was normal in both patient groups compared with controls. Therefore, FTLD patients were only impaired when the predicted timing of an external event was required to elicit an action. These results argue in favor of a role of the frontal lobes in predictive movement timing. |
Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier Influence of soundtrack on eye movements during video exploration Journal Article In: Journal of Eye Movement Research, vol. 5, no. 4, pp. 1–10, 2012. @article{Coutrot2012, Models of visual attention rely on visual features such as orientation, intensity or motion to predict which regions of complex scenes attract the gaze of observers. So far, sound has never been considered as a possible feature that might influence eye movements. Here, we evaluate the impact of non-spatial sound on the eye movements of observers watching videos. We recorded eye movements of 40 participants watching assorted videos with and without their related soundtracks. We found that sound impacts on eye position, fixation duration and saccade amplitude. The effect of sound is not constant across time but becomes significant around one second after the beginning of video shots. |
Christopher D. Cowper-Smith; Gail A. Eskes; David A. Westwood Saccadic inhibition of return can arise from late-stage execution processes Journal Article In: Neuroscience Letters, vol. 531, no. 2, pp. 120–124, 2012. @article{CowperSmith2012, Inhibition of return (IOR) is thought to improve the efficiency of visual search behaviour by biasing attention, eye movements, or both, toward novel stimuli. Previous research suggests that IOR might arise from early sensory, attentional or motor programming processes. In the present study, we were interested in determining if IOR could instead arise from processes operating at or during response execution, independent from effects on earlier processes. Participants made consecutive saccades (from a common starting location) to central arrowhead stimuli. We removed the possible contribution of early sensory/attentional and motor preparation effects in IOR by allowing participants to fully prepare their responses in advance of an execution signal. When responses were prepared in advance, we continued to observe IOR. Our data therefore provide clear evidence that saccadic IOR can result from an execution bias that might arise from inhibitory effects on motor output neurons, or alternatively from late attentional engagement processes. |
Abbie L. Coy; Samuel B. Hutton The influence of extrafoveal emotional faces on prosaccade latencies Journal Article In: Visual Cognition, vol. 20, no. 8, pp. 883–901, 2012. @article{Coy2012, Across three experiments we sought to determine whether extrafoveally presented emotional faces are processed sufficiently rapidly to influence saccade programming. Two rectangular targets containing a neutral and an emotional face were presented either side of a central fixation cross. Participants made prosaccades towards an abrupt luminosity change to the border of one of the rectangles. The faces appeared 150 ms before or simultaneously with the cue. Saccades were faster towards cued rectangles containing emotional compared to neutral faces even when the rectangles were positioned 12 degrees from the fixation cross. When faces were inverted, the facilitative effect of emotion only emerged in the −150 ms SOA condition, possibly reflecting a shift from configural to featural face processing. Together the results suggest that the human brain is highly specialized for processing emotional information and responds very rapidly to the brief presentation of expressive faces, even when these are located outside foveal vision. |
Joost C. Dessing; Patrick A. Byrne; Armin Abadeh; J. Douglas Crawford Hand-related rather than goal-related source of gaze-dependent errors in memory-guided reaching Journal Article In: Journal of Vision, vol. 12, no. 11, pp. 1–8, 2012. @article{Dessing2012, Mechanisms for visuospatial cognition are often inferred directly from errors in behavioral reports of remembered target direction. For example, gaze-centered target representations for reach were first inferred from reach overshoots of target location relative to gaze. Here, we report evidence for the hypothesis that these gaze-dependent reach errors stem predominantly from misestimates of hand rather than target position, as was assumed in all previous studies. Subjects showed typical gaze-dependent overshoots in complete darkness, but these errors were entirely suppressed by continuous visual feedback of the finger. This manipulation could not affect target representations, so the suppressed gaze-dependent errors must have come from misestimates of hand position, likely arising in a gaze-dependent transformation of hand position signals into visual coordinates. This finding has broad implications for any task involving localization of visual targets relative to unseen limbs, in both healthy individuals and patient populations, and shows that response-related transformations cannot be ignored when deducing the sources of gaze-related errors. |
Joost C. Dessing; Frédéric P. Rey; Peter J. Beek Gaze fixation improves the stability of expert juggling Journal Article In: Experimental Brain Research, vol. 216, no. 4, pp. 635–644, 2012. @article{Dessing2012a, Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (so-called gaze-through). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers were included (n = 5) for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled significantly less variably when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain. |
Christel Devue; Artem V. Belopolsky; Jan Theeuwes Oculomotor guidance and capture by irrelevant faces Journal Article In: PLoS ONE, vol. 7, no. 4, pp. e34598, 2012. @article{Devue2012, Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning. |
Michael D. Dodd; Amanda Balzer; Carly M. Jacobs; Michael W. Gruszczynski; Kevin B. Smith; John R. Hibbing The political left rolls with the good and the political right confronts the bad: Connecting physiology and cognition to preferences Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 367, pp. 640–649, 2012. @article{Dodd2012, We report evidence that individual-level variation in people's physiological and attentional responses to aversive and appetitive stimuli are correlated with broad political orientations. Specifically, we find that greater orientation to aversive stimuli tends to be associated with right-of-centre and greater orientation to appetitive (pleasing) stimuli with left-of-centre political inclinations. These findings are consistent with recent evidence that political views are connected to physiological predispositions but are unique in incorporating findings on variation in directed attention that make it possible to understand additional aspects of the link between the physiological and the political. |
Markus Bindemann; Adam Sandford; Katherine Gillatt; Meri Avetisyan; Ahmed M. Megreya Recognising faces seen alone or with others: Why are two heads worse than one? Journal Article In: Perception, vol. 41, no. 4, pp. 415–435, 2012. @article{Bindemann2012, The ability to identify an unfamiliar target face from an identity lineup declines when it is accompanied by a second face during visual encoding. This two-face disadvantage is still little studied and its basis remains poorly understood. This study investigated several possible explanations for this phenomenon. Experiments 1 and 2 varied the number of potential targets (1 or 2) and the number of faces in a lineup (5 or 10) to explore if this effect arises from the number of identity comparisons that need to be made to detect a target in a lineup. These experiments also explored if this effect arises from an uncertainty concerning which is the to-be-identified target in two-face displays, by cueing the relevant face during encoding. Experiment 3 then examined whether the two-face disadvantage reflects the depth of face encoding or a memory effect. The results show that this effect arises from the additional comparisons that are necessary to compare two potential targets to an identity lineup when memory demands are minimized (Experiment 1), but it reflects a difficulty in remembering several faces when targets and lineups cannot be viewed simultaneously (Experiments 2 and 3). However, in both cases the two-face disadvantage could not be eliminated fully by cueing the target. This hints at a further possible locus for this effect, which might reflect perceptual interference during the initial encoding of the target. The implications of these findings are discussed. |
Gary D. Bird; Johan Lauwereyns; Matthew T. Crawford The role of eye movements in decision making and the prospect of exposure effects Journal Article In: Vision Research, vol. 60, pp. 16–21, 2012. @article{Bird2012, The aim of the current study was to follow on from previous findings that eye movements can have a causal influence on preference formation. Shimojo et al. (2003) previously found that faces that were presented for a longer duration in a two alternative forced choice task were more likely to be judged as more attractive. This effect only occurred when an eye movement was made towards the faces (with no effect when faces were centrally presented). The current study replicated Shimojo et al.'s (2003) design, whilst controlling for potential inter-stimuli interference in central presentations. As per previous findings, when eye movements were made towards the stimuli, faces that were presented for longer durations were preferred. However, faces that were centrally presented (thus not requiring an eye movement) were also preferred in the current study. The presence of an exposure duration effect for centrally presented faces casts doubt on the necessity of the eye movement in this decision making process and has implications for decision theories that place an emphasis on the role of eye movements in decision making. |
Daniel P. Blakely; Timothy J. Wright; Vincent M. Dehili; Walter R. Boot; James R. Brockmole Characterizing the time course and nature of attentional disengagement effects Journal Article In: Vision Research, vol. 56, pp. 38–48, 2012. @article{Blakely2012, Visual features of fixated but irrelevant items contribute to both how long overt attention dwells at a location and to decisions regarding the location of subsequent attention shifts (Boot & Brockmole, 2010; Brockmole & Boot, 2009). Fixated but irrelevant search items that share the color of the search target delay the deployment of attention. Furthermore, eye movements are biased to distractors that share the color of the currently fixated item. We present a series of experiments that examined these effects in depth. Experiment 1 explored the time course of disengagement effects. Experiments 2 and 3 explored the generalizability of disengagement effects by testing whether they could be observed when participants searched for targets defined by form instead of color. Finally, Experiment 4 validated the disengagement paradigm as a measure of disengagement and ruled out alternative explanations for slowed saccadic reaction times. Results confirm and extend our understanding of the influence of features within the focus of attention on when and where attention will shift next. |
Ebrahim Pishyareh; Mehdi Tehrani-Doost; Javad Mahmoudi-Gharaei; Anahita Khorrami; Mitra Joudi; Mehrnoosh Ahmadi Attentional bias towards emotional scenes in boys with attention deficit hyperactivity disorder Journal Article In: Iranian Journal of Psychiatry, vol. 7, no. 2, pp. 93–96, 2012. @article{Pishyareh2012, OBJECTIVE: Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate visual directions of children with ADHD towards paired emotional scenes. METHOD: Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented paired emotional and neutral scenes in the four following categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using the eye tracking system. The number and duration of first fixation and duration of first gaze were compared between the two groups using the MANOVA analysis. The performance of each group in different categories was also analyzed using the Friedman test. RESULTS: With regards to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, ADHD children spent less time on pleasant pictures compared to the normal group while they were looking at pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant-neutral pairs (P<0.01). CONCLUSION: Based on the findings of this study it could be concluded that children with ADHD attend to unpleasant conditions more than normal children, which leads to their emotional reactivity. |
Irina Pivneva; Caroline Palmer; Debra Titone Inhibitory control and L2 proficiency modulate bilingual language production: Evidence from spontaneous monologue and dialogue speech Journal Article In: Frontiers in Psychology, vol. 3, pp. 57, 2012. @article{Pivneva2012, Bilingual language production requires that speakers recruit inhibitory control (IC) to optimally balance the activation of more than one linguistic system when they produce speech. Moreover, the amount of IC necessary to maintain an optimal balance is likely to vary across individuals as a function of second language (L2) proficiency and inhibitory capacity, as well as the demands of a particular communicative situation. Here, we investigate how these factors relate to bilingual language production across monologue and dialogue spontaneous speech. In these tasks, 42 English–French and French–English bilinguals produced spontaneous speech in their first language (L1) and their L2, with and without a conversational partner. Participants also completed a separate battery that assessed L2 proficiency and inhibitory capacity. The results showed that L2 vs. L1 production was generally more effortful, as was dialogue vs. monologue speech production although the clarity of what was produced was higher for dialogues vs. monologues. As well, language production effort significantly varied as a function of individual differences in L2 proficiency and inhibitory capacity. Taken together, the overall pattern of findings suggests that both increased L2 proficiency and inhibitory capacity relate to efficient language production during spontaneous monologue and dialogue speech. |
Rachel McDonnell; Martin Breidt; Heinrich H. Bülthoff Render me real? Investigating the effect of render style on the perception of animated virtual humans Journal Article In: ACM Transactions on Graphics, vol. 31, no. 4, pp. 1–11, 2012. @article{McDonnell2012, The realistic depiction of lifelike virtual humans has been the goal of many movie makers in the last decade. Recently, films such as Tron: Legacy and The Curious Case of Benjamin Button have produced highly realistic characters. In the real-time domain, there is also a need to deliver realistic virtual characters, with the increase in popularity of interactive drama video games (such as L.A. Noire™ or Heavy Rain™). There have been mixed reactions from audiences to lifelike characters used in movies and games, with some saying that the increased realism highlights subtle imperfections, which can be disturbing. Some developers opt for a stylized rendering (such as cartoon-shading) to avoid a negative reaction [Thompson 2004]. In this paper, we investigate some of the consequences of choosing realistic or stylized rendering in order to provide guidelines for developers for creating appealing virtual characters. We conducted a series of psychophysical experiments to determine whether render style affects how virtual humans are perceived. Motion capture with synchronized eye-tracked data was used throughout to animate custom-made virtual model replicas of the captured actors. |
Robert D. McIntosh; Antimo Buonocore Dissociated effects of distractors on saccades and manual aiming Journal Article In: Experimental Brain Research, vol. 220, no. 3-4, pp. 201–211, 2012. @article{McIntosh2012, The remote distractor effect (RDE) is a robust phenomenon whereby target-directed saccades are delayed by the appearance of a distractor. This effect persists even when the target location is perfectly predictable. The RDE has been studied extensively in the oculomotor domain but it is unknown whether it generalises to other spatially oriented responses. In three experiments, we tested whether the RDE generalises to manual aiming. Experiment 1 required participants to move their hand or eyes to predictable targets presented alone or accompanied by a distractor in the opposite hemifield. The RDE was observed for the eyes but not for the hand. Experiment 2 replicated this dissociation in a more naturalistic task in which eye movements were not constrained during manual aiming. Experiment 3 confirmed the lack of manual RDE across a wider range of distractor delays (0, 50, 100, and 150 ms). Our data imply that the RDE is specific to the oculomotor system, at least for non-foveal distractors. We suggest that the oculomotor RDE reflects competitive interactions between target and distractor representations in the superior colliculus, which are not necessarily shared by manual aiming. |
Ahmed M. Megreya; Markus Bindemann; Catriona Havard; A. Mike Burton Identity-lineup location influences target selection: Evidence from eye movements Journal Article In: Journal of Police and Criminal Psychology, vol. 27, no. 2, pp. 167–178, 2012. @article{Megreya2012, Eyewitnesses often have to recognize the perpetrators of an observed crime from identity lineups. In the construction of these lineups, a decision must be made concerning where a suspect should be placed, but whether location in a lineup affects the identification of a perpetrator has received little attention. This study explored this problem with a face-matching task, in which observers decided if pairs of faces depict the same person or two different people (Experiment 1), and with a lineup task in which the presence of a target had to be detected in an identity parade of five faces (Experiment 2). In addition, this study also explored if high accuracy is related to a perceptual pop-out effect, whereby the target is detected rapidly among the lineup. In both experiments, observers' eye movements revealed that location determines the order in which people were viewed, whereby faces on the left side were consistently viewed first. This location effect was reflected also in observers' responses, so that a foil face on the left side of a lineup display was more likely to be misidentified as the target. However, identification accuracy was not related to a pop-out effect. The implications of these findings are discussed. |
Ben Meijering; Hedderik van Rijn; Niels A. Taatgen; Rineke Verbrugge What eye movements can tell about theory of mind in a strategic game Journal Article In: PLoS ONE, vol. 7, no. 9, pp. e45961, 2012. @article{Meijering2012, This study investigates strategies in reasoning about mental states of others, a process that requires theory of mind. It is a first step in studying the cognitive basis of such reasoning, as strategies affect tradeoffs between cognitive resources. Participants were presented with a two-player game that required reasoning about the mental states of the opponent. Game theory literature discerns two candidate strategies that participants could use in this game: either forward reasoning or backward reasoning. Forward reasoning proceeds from the first decision point to the last, whereas backward reasoning proceeds in the opposite direction. Backward reasoning is the only optimal strategy, because the optimal outcome is known at each decision point. Nevertheless, we argue that participants prefer forward reasoning because it is similar to causal reasoning. Causal reasoning, in turn, is prevalent in human reasoning. Eye movements were measured to discern between forward and backward progressions of fixations. The observed fixation sequences corresponded best with forward reasoning. Early in games, the probability of observing a forward progression of fixations is higher than the probability of observing a backward progression. Later in games, the probabilities of forward and backward progressions are similar, which seems to imply that participants were either applying backward reasoning or jumping back to previous decision points while applying forward reasoning. Thus, the game-theoretical favorite strategy, backward reasoning, does seem to exist in human reasoning. However, participants preferred the more familiar, practiced, and prevalent strategy: forward reasoning. |
David Melcher; Alessio Fracasso Remapping of the line motion illusion across eye movements Journal Article In: Experimental Brain Research, vol. 218, no. 4, pp. 503–514, 2012. @article{Melcher2012, Although motion processing in the brain has been classically studied in terms of retinotopically defined receptive fields, recent evidence suggests that motion perception can occur in a spatiotopic reference frame. We investigated the underlying mechanisms of spatiotopic motion perception by examining the role of saccade metrics as well as the capacity of trans-saccadic motion. To this end, we used the line motion illusion (LMI), in which a straight line briefly shown after a high contrast stimulus (inducer) is perceived as expanding away from the inducer position. This illusion provides an interesting test of spatiotopic motion because the neural correlates of this phenomenon have been found early in the visual cortex and the effect does not require focused attention. We measured the strength of LMI both with stable fixation and when participants were asked to perform a 10° saccade during the blank ISI between the inducer and the line. A strong motion illusion was found across saccades in spatiotopic coordinates. When the inducer was presented near in time to the saccade cue, saccadic latencies were longer, saccade amplitudes were shorter, and the strength of reported LMI was consistently reduced. We also measured the capacity of the trans-saccadic LMI by varying the number of inducers. In contrast to a visual-spatial memory task, we found that the LMI was largely eliminated by saccades when two or more inducers were displayed. Together, these results suggest that motion perceived in non-retinotopic coordinates depends on an active, saccade-dependent remapping process with a strictly limited capacity. |
Tamaryn Menneer; Michael J. Stroud; Kyle R. Cave; Xingshan Li; Hayward J. Godwin; Simon P. Liversedge; Nick Donnelly Search for two categories of target produces fewer fixations to target-color items Journal Article In: Journal of Experimental Psychology: Applied, vol. 18, no. 4, pp. 404–418, 2012. @article{Menneer2012, Searching simultaneously for metal threats (guns and knives) and improvised explosive devices (IEDs) in X-ray images is less effective than 2 independent single-target searches, 1 for metal threats and 1 for IEDs. The goals of this study were to (a) replicate this dual-target cost for categorical targets and to determine whether the cost remains when X-ray images overlap, (b) determine the role of attentional guidance in this dual-target cost by measuring eye movements, and (c) determine the effect of practice on guidance. Untrained participants conducted 5,376 trials of visual search of X-ray images, each specializing in single-target search for metal threats, single-target search for IEDs, or dual-target search for both. In dual-target search, only 1 target (metal threat or IED) at most appeared on any 1 trial. Eye movements, response time, and accuracy were compared across single-target and dual-target searches. Results showed a dual-target cost in response time, accuracy, and guidance, with fewer fixations to target-color objects and disproportionately more to non-target-color objects, compared with single-target search. Such reduction in guidance explains why targets are missed in dual-target search, which was particularly noticeable when objects overlapped. After extensive practice, accuracy, response time, and guidance remained better in single-target search than in dual-target search. The results indicate that, when 2 different target representations are required for search, both representations cannot be maintained as accurately as in separate single-target searches. They suggest that baggage X-ray security screeners should specialize in one type of threat, or be trained to conduct 2 independent searches, 1 for each threat item. |
Adam Palanica; Roxane J. Itier Attention capture by direct gaze is robust to context and task demands Journal Article In: Journal of Nonverbal Behavior, vol. 36, no. 2, pp. 123–134, 2012. @article{Palanica2012, Eye-tracking was used to investigate whether gaze direction would influence the visual scanning of faces, when presented in the context of a full character, in different social settings, and with different task demands. Participants viewed individual computer agents against either a blank background or a bar scene setting, during both a free-viewing task and an attractiveness rating task for each character. Faces with a direct gaze were viewed longer than faces with an averted gaze regardless of body context, social settings, and task demands. Additionally, participants evaluated characters with a direct gaze as more attractive than characters with an averted gaze. These results, obtained with pictures of computer agents rather than real people, suggest that direct gaze is a powerful attention grabbing stimulus that is robust to background context or task demands. |
Femke Maij; Maria Matziridi; Jeroen B. J. Smeets; Eli Brenner Luminance contrast in the background makes flashes harder to detect during saccades Journal Article In: Vision Research, vol. 60, pp. 22–27, 2012. @article{Maij2012, To explore a visual scene we make many fast eye movements (saccades) every second. During those saccades the image of the world shifts rapidly across our retina. These shifts are normally not detected, because perception is suppressed during saccades. In this paper we study the origin of this saccadic suppression by examining the influence of luminance borders in the background on the perception of flashes presented near the time of saccades in a normally illuminated room. We used different types of backgrounds: either with isoluminant red and green areas or with black and white areas. We found that the ability to perceive flashes that were presented during saccades was suppressed when there were luminance borders in the background, but not when there were isoluminant color borders in the background. Thus, masking by moving luminance borders plays an important role in saccadic suppression. The perceived positions of detected flashes were only influenced by the borders between the areas in the background when the flashes were presented before or after the saccades. Moreover, the influence did not depend on the kind of contrast forming the border. Thus, the masking effect of moving luminance borders does not appear to play an important role in the mislocalization of flashes that are presented near the time of saccades. |
Tal Seidel Malkinson; Ayelet McKyton; Ehud Zohary Motion adaptation reveals that the motion vector is represented in multiple coordinate frames Journal Article In: Journal of Vision, vol. 12, no. 6, pp. 1–11, 2012. @article{Malkinson2012, Accurately perceiving the velocity of an object during smooth pursuit is a complex challenge: although the object is moving in the world, it is almost still on the retina. Yet we can perceive the veridical motion of a visual stimulus in such conditions, suggesting a nonretinal representation of the motion vector. To explore this issue, we studied the frames of representation of the motion vector by evoking the well known motion aftereffect during smooth-pursuit eye movements (SPEM). In the retinotopic configuration, due to an accompanying smooth pursuit, a stationary adapting random-dot stimulus was actually moving on the retina. Motion adaptation could therefore only result from motion in retinal coordinates. In contrast, in the spatiotopic configuration, the adapting stimulus moved on the screen but was practically stationary on the retina due to a matched SPEM. Hence, adaptation here would suggest a representation of the motion vector in spatiotopic coordinates. We found that exposure to spatiotopic motion led to significant adaptation. Moreover, the degree of adaptation in that condition was greater than the adaptation induced by viewing a random-dot stimulus that moved only on the retina. Finally, pursuit of the same target, without a random-dot array background, yielded no adaptation. Thus, in our experimental conditions, adaptation is not induced by the SPEM per se. Our results suggest that motion computation is likely to occur in parallel in two distinct representations: a low-level, retinal-motion dependent mechanism and a high-level representation, in which the veridical motion is computed through integration of information from other sources. |
Jennifer Malsert; Nathalie Guyader; Alan Chauvin; Christian Marendaz In: Cognitive Neuroscience, vol. 3, no. 2, pp. 105–111, 2012. @article{Malsert2012, Instructing participants to "identify a target" dramatically reduces saccadic reaction times in prosaccade tasks (PS). However, it has been recently shown that this effect disappears in antisaccade tasks (AS). The instruction effect observed in PS may result from top-down processes, mediated by pathways connecting the prefrontal cortex (PFC) to the superior colliculus. In AS, the PFC's prior involvement is in competition with the instruction process, annulling its effect. This study aims to discover whether the instruction effect persists in mixed paradigms. According to Dyckman's fMRI study (2007), the difficulty of mixed tasks leads to PFC involvement. The antisaccade-related PFC activation observed on comparison of blocked AS and PS therefore disappears when the two are compared in mixed paradigms. However, we continued to observe the instruction effect for both PS and AS. We therefore posit different types of PFC activation: phasic during blocked AS, and tonic during mixed saccadic experiments. |
Jennifer Malsert; Nathalie Guyader; Alan Chauvin; Mircea Polosan; Emmanuel Poulet; David Szekely; Thierry Bougerol; Christian Marendaz Antisaccades as a follow-up tool in major depressive disorder therapies: A pilot study Journal Article In: Psychiatry Research, vol. 200, no. 2-3, pp. 1051–1053, 2012. @article{Malsert2012a, Eight patients with major depression, included in a double-blind study, performed an antisaccade task. Results suggested a link between antisaccade performances and clinical scale scores in patients who respond to therapy. Moreover, error rates may well predict response from day of inclusion, thus serving as a state-marker for mood disorders. |
Pamela J. Marsh; Gemma Luckett; Tamara A. Russell; Max Coltheart; Melissa J. Green Effects of facial emotion recognition remediation on visual scanning of novel face stimuli Journal Article In: Schizophrenia Research, vol. 141, no. 2-3, pp. 234–240, 2012. @article{Marsh2012, Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of emotion surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. |
Sebastiaan Mathôt; Jan Theeuwes It's all about the transient: Intra-saccadic onset stimuli do not capture attention Journal Article In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–12, 2012. @article{Mathot2012, An abrupt onset stimulus was presented while the participants' eyes were in motion. Because of saccadic suppression, participants did not perceive the visual transient that normally accompanies the sudden appearance of a stimulus. In contrast to the typical finding that the presentation of an abrupt onset captures attention and interferes with the participants' responses, we found that an intra-saccadic abrupt onset does not capture attention: It has no effect beyond that of increasing the set-size of the search array by one item. This finding favours the local transient account of attentional capture over the novel object hypothesis. |
Justin T. Maxfield; Gregory J. Zelinsky Searching through the hierarchy: How level of target categorization affects visual search Journal Article In: Visual Cognition, vol. 20, no. 10, pp. 1153–1163, 2012. @article{Maxfield2012, Does the same basic-level advantage commonly observed in the categorization literature also hold for targets in a search task? We answered this question by first conducting a category verification task to define a set of categories showing a standard basic-level advantage, which we then used as stimuli in a search experiment. Participants were cued with a picture preview of the target or its category name at either superordinate, basic, or subordinate levels, then shown a target-present/absent search display. Although search guidance and target verification was best using pictorial cues, the effectiveness of the categorical cues depended on the hierarchical level. Search guidance was best for the specific subordinate-level cues, whereas target verification showed a standard basic-level advantage. These findings demonstrate different hierarchical advantages for guidance and verification in categorical search. We interpret these results as evidence for a common target representation underlying categorical search guidance and verification. |
Leanne Quigley; Andrea L. Nelson; Jonathan Carriere; Daniel Smilek; Christine Purdon The effects of trait and state anxiety on attention to emotional images: An eye-tracking study Journal Article In: Cognition and Emotion, vol. 26, no. 8, pp. 1390–1411, 2012. @article{Quigley2012, Attentional biases for threatening stimuli have been implicated in the development of anxiety disorders. However, little is known about the relative influences of trait and state anxiety on attentional biases. This study examined the effects of trait and state anxiety on attention to emotional images. Low, mid, and high trait anxious participants completed two trial blocks of an eye-tracking task. Participants viewed image pairs consisting of one emotional (threatening or positive) and one neutral image while their eye movements were recorded. Between trial blocks, participants underwent an anxiety induction. Primary analyses examined the effects of trait and state anxiety on the proportion of viewing time on emotional versus neutral images. State anxiety was associated with increased attention to threatening images for participants, regardless of trait anxiety. Furthermore, when in a state of anxiety, relative to a baseline condition, durations of initial gaze and average fixation were longer on threat versus neutral images. These findings were specific to the threatening images; no anxiety-related differences in attention were found with the positive images. The implications of these results for future research, models of anxiety-related information processing, and clinical interventions for anxiety are discussed. |
Chao Hsuan Liu; Ovid J. L. Tzeng; Daisy L. Hung; Philip Tseng; Chi-Hung Juan Investigation of bistable perception with the "silhouette spinner": Sit still, spin the dancer with your will Journal Article In: Vision Research, vol. 60, pp. 34–39, 2012. @article{Liu2012, Many studies have used static and non-biologically related stimuli to investigate bistable perception and found that the percept is usually dominated by their intrinsic nature with some influence of voluntary control from the viewer. Here we used a dynamic stimulus of a rotating human body, the silhouette spinner illusion, to investigate how the viewers' intentions may affect their percepts. In two experiments, we manipulated observer intention (active or passive), fixation position (body or feet), and spinning velocity (fast, medium, or slow). Our results showed that the normalized alternating rate between two bistable percepts was greater when (1) participants actively attempted to switch percepts, (2) participants fixated at the spinner's feet rather than the body, inducing as many as 25 switches of the bistable percepts within 1 min, and (3) they watched the spinner at high velocity. These results suggest that a dynamic biologically-bistable percept can be quickly alternated by the viewers' intention. Furthermore, the higher alternating rate in the feet condition compared to the body condition suggests a role for biological meaningfulness in determining bistable percepts, where 'biologically plausible' interpretations are favored by the visual system. |
Wei Liu; Chengkun Liu; Damin Zhuang; Zhong Qi Liu; Xiugan Yuan Comparison of expert and novice eye movement behaviors during landing flight Journal Article In: Advanced Materials Research, vol. 383-390, pp. 2556–2560, 2012. @article{Liu2012a, Objective: To study expert and novice eye movement patterns during simulated landing flight, providing references for evaluating flight performance and for pilot training. Methods: The subjects were divided into two groups, expert and novice, according to their flight simulation experience. Eye movement data were recorded while they performed a landing task, and flight performance and eye movement data were compared between the two groups. Results: Experts and novices differed not only in flight performance but also in eye movement pattern. Experts performed better than novices, and showed shorter fixation times, more fixation points, faster scan velocity, greater scan frequency, and a wider scan area. The experts' eye movement pattern was also associated with lower mental workload. Conclusion: Flight performance is related to eye movement pattern; effective eye movement patterns accompany good flight performance. Analysis of eye movement indices can be used to evaluate pilots' flight performance and to inform flight training. |
Aspasia E. Paltoglou; Peter Neri Attentional control of sensory tuning in human visual perception Journal Article In: Journal of Neurophysiology, vol. 107, no. 5, pp. 1260–1274, 2012. @article{Paltoglou2012, Attention is known to affect the response properties of sensory neurons in visual cortex. These effects have been traditionally classified into two categories: 1) changes in the gain (overall amplitude) of the response; and 2) changes in the tuning (selectivity) of the response. We performed an extensive series of behavioral measurements using psychophysical reverse correlation to understand whether/how these neuronal changes are reflected at the level of our perceptual experience. This question has been addressed before, but by different laboratories using different attentional manipulations and stimuli/tasks that are not directly comparable, making it difficult to extract a comprehensive and coherent picture from existing literature. Our results demonstrate that the effect of attention on response gain (not necessarily associated with tuning change) is relatively aspecific: it occurred across all the conditions we tested, including attention directed to a feature orthogonal to the primary feature for the assigned task. Sensory tuning, however, was affected primarily by feature-based attention and only to a limited extent by spatially directed attention, in line with existing evidence from the electrophysiological and behavioral literature. |
Juan A. Pérez; Stefano Passini Avoiding minorities: Social invisibility Journal Article In: European Journal of Social Psychology, vol. 42, no. 7, pp. 864–874, 2012. @article{Perez2012, Three experiments examined how self-consciousness has an impact on the visual exploration of a social field. The main hypothesis was that merely a photograph of people can trigger a dynamic process of social visual interaction such that minority images are avoided when people are in a state of self-reflective consciousness. In all three experiments, pairs of pictures—one with characters of social minorities and one with characters of social majorities—were shown to the participants. By means of eye-tracking technology, the results of Experiment 1 (n = 20) confirmed the hypothesis that in the reflective consciousness condition, people look more at the majority than minority characters. The results of Experiment 2 (n = 89) confirmed the hypothesis that reflective consciousness also induces avoiding reciprocal visual interaction with minorities. Finally, by manipulating the visual interaction (direct vs. non-direct) with the photos of minority and majority characters, the results of Experiment 3 (n = 56) confirmed the hypothesis that direct visual interaction with minority characters is perceived as being longer and more aversive. The overall conclusion is that self-reflective consciousness leads people to avoid visual interaction with social minorities, consigning them to social invisibility. |
Carolyn J. Perry; Mazyar Fallah Color improves speed of processing but not perception in a motion illusion Journal Article In: Frontiers in Psychology, vol. 3, pp. 92, 2012. @article{Perry2012, When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination and reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored. |
Yoni Pertzov; Mia Yuan Dong; Muy Cheng Peich; Masud Husain Forgetting what was where: The fragility of object-location binding Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e48214, 2012. @article{Pertzov2012, Although we frequently take advantage of memory for object locations in everyday life, how an object's identity is bound correctly to its location remains unclear. Here we examine how information about object identity, location and, crucially, object-location associations are differentially susceptible to forgetting, over variable retention intervals and memory load. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These 'swap' errors could not be attributed to simple failure to remember either the identity or location of the objects, but rather appeared to arise from failure to bind object identity and location in memory. Moreover, such binding failures significantly contributed to the decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory; rather, it is the links between identity and location that are prone to be broken over time. |
Anders Petersen; Søren Kyllingsbæk; Claus Bundesen Measuring and modeling attentional dwell time Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1029–1046, 2012. @article{Petersen2012, Attentional dwell time (AD) defines our inability to perceive spatially separate events when they occur in rapid succession. In the standard AD paradigm, subjects must identify two target stimuli presented briefly at different peripheral locations with a varied stimulus onset asynchrony (SOA). The AD effect is seen as a long-lasting impediment in reporting the second target, culminating at SOAs of 200–500 ms. Here, we present the first quantitative computational model of the effect: a theory of temporal visual attention. The model is based on the neural theory of visual attention (Bundesen, Habekost, & Kyllingsbæk, Psychological Review, 112, 291–328, 2005) and introduces the novel assumption that a stimulus retained in visual short-term memory takes up visual processing resources used to encode stimuli into memory. Resources are thus locked and cannot process subsequent stimuli until the stimulus in memory has been recoded, which explains the long-lasting AD effect. The model is used to explain results from two experiments providing detailed individual data from both a standard AD paradigm and an extension with varied exposure duration of the target stimuli. Finally, we discuss new predictions by the model. |
Mary A. Peterson; Laura Cacciamani; Morgan D. Barense; Paige E. Scalf The perirhinal cortex modulates V2 activity in response to the agreement between part familiarity and configuration familiarity Journal Article In: Hippocampus, vol. 22, no. 10, pp. 1965–1977, 2012. @article{Peterson2012, Research has demonstrated that the perirhinal cortex (PRC) represents complex object-level feature configurations, and participates in familiarity versus novelty discrimination. Barense et al. [(in press) Cerebral Cortex, 22:11, doi:10.1093/cercor/bhr347] postulated that, in addition, the PRC modulates part familiarity responses in lower-level visual areas. We used fMRI to measure activation in the PRC and V2 in response to silhouettes presented peripherally while participants maintained central fixation and performed an object recognition task. There were three types of silhouettes: Familiar Configurations, which portrayed real-world objects; Part-Rearranged Novel Configurations, created by spatially rearranging the parts of the familiar configurations; and Control Novel Configurations, in which both the configuration and the ensemble of parts comprising it were novel. For right visual field (RVF) presentation, BOLD responses revealed a significant linear trend in bilateral BA 35 of the PRC (highest activation for Familiar Configurations, lowest for Part-Rearranged Novel Configurations, with Control Novel Configurations in between). For left visual field (LVF) presentation, a significant linear trend was found in a different area (bilateral BA 38, temporal pole) in the opposite direction (Part-Rearranged Novel Configurations highest, Familiar Configurations lowest). These data confirm that the PRC is sensitive to the agreement in familiarity between the configuration level and the part level. As predicted, V2 activation mimicked that of the PRC: for RVF presentation, activity in V2 was significantly higher in the left hemisphere for Familiar Configurations than for Part-Rearranged Novel Configurations, and for LVF presentation, the opposite effect was found in right hemisphere V2. We attribute these patterns in V2 to feedback from the PRC because receptive fields in V2 encompass parts but not configurations. These results reveal two new aspects of PRC function: (1) it is sensitive to the congruency between the familiarity of object configurations and the parts comprising those configurations and (2) it likely modulates familiarity responses in visual area V2. |
Matthew F. Peterson; Miguel P. Eckstein Looking just below the eyes is optimal across face recognition tasks Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. E3314–E3323, 2012. @article{Peterson2012a, When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person's identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations correctly predict human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes. |
Sam London; Christopher W. Bishop; Lee M. Miller Spatial attention modulates the precedence effect Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 6, pp. 1371–1379, 2012. @article{London2012, Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference. In a rapid phenomenon known as the precedence effect, reflections are perceptually fused with the veridical primary sound. The brain can also use spatial attention to highlight a target sound at the expense of distracters. Although attention has been shown to modulate many auditory perceptual phenomena, rarely does it alter how acoustic energy is first parsed into objects, as with the precedence effect. This brief report suggests that both endogenous (voluntary) and exogenous (stimulus-driven) spatial attention have a profound influence on the precedence effect depending on where they are oriented. Moreover, we observed that both types of attention could enhance perceptual fusion while only exogenous attention could hinder it. These results demonstrate that attention, by altering how auditory objects are formed, guides the basic perceptual organization of our acoustic environment. |
Casimir J. H. Ludwig; Simon Farrell; Lucy A. Ellis; Tom E. Hardwicke; Iain D. Gilchrist Context-gated statistical learning and its role in visual-saccadic decisions Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 1, pp. 150–169, 2012. @article{Ludwig2012, Adaptive behavior in a nonstationary world requires humans to learn and track the statistics of the environment. We examined the mechanisms of adaptation in a nonstationary environment in the context of visual-saccadic inhibition of return (IOR). IOR is adapted to the likelihood that return locations will be refixated in the near future. We examined 2 potential learning mechanisms underlying adaptation: (a) a local tracking or priming mechanism that facilitates behavior that is consistent with recent experience and (b) a mechanism that supports retrieval of knowledge of the environmental statistics based on the contextual features of the environment. Participants generated sequences of 2 saccadic eye movements in conditions where the probability that the 2nd saccade was directed back to the previously fixated location varied from low (.17) to high (.50). In some conditions, the contingency was signaled by a contextual cue (the shape of the movement cue). Adaptation occurred in the absence of contextual signals but was more pronounced in the presence of contextual cues. Adaptation even occurred when different contingencies were randomly intermixed, showing the parallel formation of multiple associations between context and statistics. These findings are accounted for by an evidence accumulation framework in which the resting baseline of decision alternatives is adjusted on a trial-by-trial basis. This baseline tracks the subjective prior beliefs about the behavioral relevance of the different alternatives and is updated on the basis of the history of recent events and the contextual features of the current environment. |
Mackay Mackay; Moran Cerf; Christof Koch Evidence for two distinct mechanisms directing gaze in natural scenes Journal Article In: Journal of Vision, vol. 12, no. 4, pp. 1–12, 2012. @article{Mackay2012, Various models have been proposed to explain the interplay between bottom-up and top-down mechanisms in driving saccades rapidly to one or a few isolated targets. We investigate this relationship using eye-tracking data from subjects viewing natural scenes to test attentional allocation to high-level objects within a mathematical decision-making framework. We show the existence of two distinct types of bottom-up saliency to objects within a visual scene, which disappear within a few fixations, and modification of this saliency by top-down influences. Our analysis reveals a subpopulation of early saccades, which are capable of accurately fixating salient targets after prior fixation within the same image. These data can be described quantitatively in terms of bottom-up saliency, including an explicit face channel, weighted by top-down influences, determining the mean rate of rise of a decision-making model to a threshold that triggers a saccade. These results are compatible with a rapid subcortical pathway generating accurate saccades to salient targets after analysis by cortical mechanisms. |
Adrian M. Madsen; Adam M. Larson; Lester C. Loschky; N. Sanjay Rebello Differences in visual attention between those who correctly and incorrectly answer physics problems Journal Article In: Physical Review Special Topics - Physics Education Research, vol. 8, pp. 010122, 2012. @article{Madsen2012, This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas consistent with a novicelike response and areas of high perceptual salience. Participants ranged from those who had only taken one high school physics course to those who had completed a Physics Ph.D. We found that participants who answered correctly spent a higher percentage of time looking at the relevant areas of the diagram, and those who answered incorrectly spent a higher percentage of time looking in areas of the diagram consistent with a novicelike answer. Thus, when solving physics problems, top-down processing plays a key role in guiding visual selective attention either to thematically relevant areas or novicelike areas depending on the accuracy of a student's physics knowledge. This result has implications for the use of visual cues to redirect individuals' attention to relevant portions of the diagrams and may potentially influence the way they reason about these problems. |
Stephanie Ahken; Gilles Comeau; Sylvie Hébert; Ramesh Balasubramaniam Eye movement patterns during the processing of musical and linguistic syntactic incongruities Journal Article In: Psychomusicology: Music, Mind, and Brain, vol. 22, no. 1, pp. 18–25, 2012. @article{Ahken2012, It has been suggested that music and language share syntax-supporting brain mechanisms. Consequently, violations of syntax in either domain may have similar effects. The present study examined the effects of syntactic incongruities on eye movements and reading time in both music and language domains. In the music notation condition, the syntactic incongruities violated the prevailing musical tonality (i.e., the last bar of the incongruent sequence was a nontonic chord or nontonic note in the given key). In the linguistic condition, syntactic incongruities violated the expected grammatical structure (i.e., sentences with anomalies carrying the progressive –ing affix or the past tense inflection). Eighteen pianists were asked to sight-read and play musical phrases (music condition) and read sentences aloud (linguistic condition). Syntactic incongruities in both domains were associated with an increase in the mean proportion and duration of fixations in the target region of interest, as well as longer reading duration. The results are consistent with the growing evidence of a shared network of neural structures for syntactic processing, while not ruling out the possibility of independent networks for each domain. |
Robert G. Alexander; Gregory J. Zelinsky Effects of part-based similarity on visual search: The Frankenbear experiment Journal Article In: Vision Research, vol. 54, pp. 20–30, 2012. @article{Alexander2012, Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogeneous and homogeneous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation. |
Arjen Alink; Felix Euler; Nikolaus Kriegeskorte; Wolf Singer; Axel Kohler Auditory motion direction encoding in auditory cortex and high-level visual cortex Journal Article In: Human Brain Mapping, vol. 33, no. 4, pp. 969–978, 2012. @article{Alink2012, The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans. |
Ava-Ann Allman; Ulrich Ettinger; Ridha Joober; Gillian A. O'Driscoll Effects of methylphenidate on basic and higher-order oculomotor functions Journal Article In: Journal of Psychopharmacology, vol. 26, no. 11, pp. 1471–1479, 2012. @article{Allman2012, Eye movements are sensitive indicators of pharmacological effects on sensorimotor and cognitive processing. Methylphenidate (MPH) is one of the most prescribed medications in psychiatry. It is increasingly used as a cognitive enhancer by healthy individuals. However, little is known of its effect on healthy cognition. Here we used oculomotor tests to evaluate the effects of MPH on basic oculomotor and executive functions. Twenty-nine males were given 20 mg of MPH orally in a double-blind placebo-controlled crossover design. Participants performed visually-guided saccades, sinusoidal smooth pursuit, predictive saccades and antisaccades one hour post-capsule administration. Heart rate and blood pressure were assessed prior to capsule administration, and again before and after task performance. Visually-guided saccade latency decreased with MPH (p<0.004). Smooth pursuit gain increased on MPH (p<0.001) and number of saccades during pursuit decreased (p<0.001). Proportion of predictive saccades increased on MPH (p<0.004), specifically in conditions with predictable timing. Peak velocity of predictive saccades increased with MPH (p<0.01). Antisaccade errors and latency were unaffected. Physiological variables were also unaffected. The effects on visually-guided saccade latency and peak velocity are consistent with MPH effects on dopamine in basal ganglia. The improvements in predictive saccade conditions and smooth pursuit suggest effects on timing functions. |
Kaoru Amano; Tsunehiro Takeda; Tomoki Haji; Masahiko Terao; Kazushi Maruya; Kenji Matsumoto; Ikuya Murakami; Shin'ya Nishida Human neural responses involved in spatial pooling of locally ambiguous motion signals Journal Article In: Journal of Neurophysiology, vol. 107, no. 12, pp. 3493–3508, 2012. @article{Amano2012, Early visual motion signals are local and one-dimensional (1-D). For specification of global two-dimensional (2-D) motion vectors, the visual system should appropriately integrate these signals across orientation and space. Previous neurophysiological studies have suggested that this integration process consists of two computational steps (estimation of local 2-D motion vectors, followed by their spatial pooling), both being identified in the area MT. Psychophysical findings, however, suggest that under certain stimulus conditions, the human visual system can also compute mathematically correct global motion vectors from direct pooling of spatially distributed 1-D motion signals. To study the neural mechanisms responsible for this novel 1-D motion pooling, we conducted human magnetoencephalography (MEG) and functional MRI experiments using a global motion stimulus comprising multiple moving Gabors (global-Gabor motion). In the first experiment, we measured MEG and blood oxygen level-dependent responses while changing motion coherence of global-Gabor motion. In the second experiment, we investigated cortical responses correlated with direction-selective adaptation to the global 2-D motion, not to local 1-D motions. We found that human MT complex (hMT+) responses show both coherence dependency and direction selectivity to global motion based on 1-D pooling. The results provide the first evidence that hMT+ is the locus of 1-D motion pooling, as well as that of conventional 2-D motion pooling. |
Brian A. Anderson; Steven Yantis Value-driven attentional and oculomotor capture during goal-directed, unconstrained viewing Journal Article In: Attention, Perception, and Psychophysics, vol. 74, pp. 1644–1653, 2012. @article{Anderson2012, Covert shifts of attention precede and direct overt eye movements to stimuli that are task relevant or physically salient. A growing body of evidence suggests that the learned value of perceptual stimuli strongly influences their attentional priority. For example, previously rewarded but otherwise irrelevant and inconspicuous stimuli capture covert attention involuntarily. It is unknown, however, whether stimuli also draw eye movements involuntarily as a consequence of their reward history. Here, we show that previously rewarded but currently task-irrelevant stimuli capture both attention and the eyes. Value-driven oculomotor capture was observed during unconstrained viewing, when neither eye movements nor fixations were required, and was strongly related to individual differences in visual working memory capacity. The appearance of a reward-associated stimulus came to evoke pupil dilation over the course of training, which provides physiological evidence that the stimuli that elicit value-driven capture come to serve as reward-predictive cues. These findings reveal a close coupling of value-driven attentional capture and eye movements that has broad implications for theories of attention and reward learning. |
Camille Morvan; Laurence T. Maloney Human visual search does not maximize the post-saccadic probability of identifying targets Journal Article In: PLoS Computational Biology, vol. 8, no. 2, pp. e1002342, 2012. @article{Morvan2012, Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. |
Felix Joseph Mercer Moss; Roland J. Baddeley; Nishan Canagarajah Eye movements to natural images as a function of sex and personality Journal Article In: PLoS ONE, vol. 7, no. 11, pp. e47870, 2012. @article{Moss2012, Women and men are different. As humans are highly visual animals, these differences should be reflected in the pattern of eye movements they make when interacting with the world. We examined fixation distributions of 52 women and men while viewing 80 natural images and found systematic differences in their spatial and temporal characteristics. The most striking of these was that women looked away from, and usually below, many objects of interest, particularly when rating images in terms of their potency. We also found reliable differences correlated with the images' semantic content, the observers' personality, and how the images were semantically evaluated. Information theoretic techniques showed that many of these differences increased with viewing time. These effects were not small: the fixations to a single action or romance film image allow the classification of the sex of an observer with 64% accuracy. While men and women may live in the same environment, what they see in this environment is reliably different. Our findings have important implications for both past and future eye movement research while confirming the significant role individual differences play in visual attention. |
Albert Moukheiber; Gilles Rautureau; Fernando Perez-Diaz; Roland Jouvent; Antoine Pelissolo Gaze behaviour in social blushers Journal Article In: Psychiatry Research, vol. 200, no. 2-3, pp. 614–619, 2012. @article{Moukheiber2012, Gaze aversion could be a central component of social phobia. Fear of blushing is a symptom of social anxiety disorder (SAD) but is not yet described as a specific diagnosis in psychiatric classifications. Our research compared gaze aversion, measured with an eye tracker, in SAD participants with or without fear of blushing while they viewed pictures of different emotional faces. Twenty-six participants with DSM-IV SAD and expressed fear of blushing (SAD+FB) were recruited in addition to twenty-five participants with social phobia and no fear of blushing (SAD-FB). Twenty-four age- and sex-matched healthy participants constituted the control group. We studied the number of fixations and the dwell time in the eye area of the pictures. The results showed gaze avoidance in the SAD-FB group when compared to controls and when compared to the SAD+FB group. However, we found no significant difference between SAD+FB and controls. We also observed a correlation between the severity of the phobia and the degree of gaze avoidance across groups. These findings seem to support the claim that social phobia is a heterogeneous disorder. Further research is advised to decide whether fear of blushing can constitute a subtype with specific behavioral characteristics. |
Marnix Naber; Maximilian Hilger; Wolfgang Einhäuser Animal detection and identification in natural scenes: Image statistics and emotional valence Journal Article In: Journal of Vision, vol. 12, no. 1, pp. 1–24, 2012. @article{Naber2012, Humans process natural scenes rapidly and accurately. Low-level image features and emotional valence affect such processing but have mostly been studied in isolation. At which processing stage these factors operate and how they interact has remained largely unaddressed. Here, we briefly presented natural images and asked observers to report the presence or absence of an animal (detection), species of the detected animal (identification), and their confidence. In a second experiment, the same observers rated images with respect to their emotional affect and estimated their anxiety when imagining a real-life encounter with the depicted animal. We found that detection and identification improved with increasing image luminance, background contrast, animal saturation, and luminance plus color contrast between target and background. Surprisingly, animals associated with lower anxiety were detected faster and identified with higher confidence, and emotional affect was a better predictor of performance than anxiety. Pupil size correlated with detection, identification, and emotional valence judgments at different time points after image presentation. Remarkably, images of threatening animals induced smaller pupil sizes, and observers with higher mean anxiety ratings had smaller pupils on average. In sum, rapid visual processing depends on contrasts between target and background features rather than overall visual context, is negatively affected by anxiety, and finds its processing stages differentially reflected in the pupillary response. |
Kazuyo Nakabayashi; Toby J. Lloyd-Jones; Natalie Butcher; Chang Hong Liu Independent influences of verbalization and race on the configural and featural processing of faces: A behavioral and eye movement study Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 38, no. 1, pp. 61–77, 2012. @article{Nakabayashi2012, Describing a face in words can either hinder or help subsequent face recognition. Here, the authors examined the relationship between the benefit from verbally describing a series of faces and the same-race advantage (SRA) whereby people are better at recognizing unfamiliar faces from their own race as compared with those from other races. Verbalization and the SRA influenced face recognition independently, as evident on both behavioral (Experiment 1) and eye movement measures (Experiment 2). The findings indicate that verbalization and the SRA each recruit different types of configural processing, with verbalization modulating face learning and the SRA modulating both face learning and recognition. Eye movement patterns demonstrated greater feature sampling for describing as compared with not describing faces and for other-race as compared with same-race faces. In both cases, sampling of the eyes, nose, and mouth played a major role in performance. The findings support a single process account whereby verbalization can influence perceptual processing in a flexible and yet fundamental way through shifting one's processing orientation. |
Joseph Arizpe; Dwight J. Kravitz; Galit Yovel; Chris I. Baker Start position strongly influences fixation patterns during face processing: Difficulties with eye movements as a measure of information use Journal Article In: PLoS ONE, vol. 7, no. 2, pp. e31106, 2012. @article{Arizpe2012, Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale. |
Hiroshi Ashida; Ichiro Kuriki; Ikuya Murakami; Rumi Hisakata; Akiyoshi Kitaoka Direction-specific fMRI adaptation reveals the visual cortical network underlying the "Rotating Snakes" illusion Journal Article In: NeuroImage, vol. 61, no. 4, pp. 1143–1152, 2012. @article{Ashida2012, The "Rotating Snakes" figure elicits a clear sense of anomalous motion in stationary repetitive patterns. We used an event-related fMRI adaptation paradigm to investigate cortical mechanisms underlying the illusory motion. Following an adapting stimulus (S1) and a blank period, a probe stimulus (S2) that elicited illusory motion either in the same or in the opposite direction was presented. Attention was controlled by a fixation task, and control experiments precluded explanations in terms of artefacts of local adaptation, afterimages, or involuntary eye movements. Recorded BOLD responses were smaller for S2 in the same direction than S2 in the opposite direction in V1-V4, V3A, and MT+, indicating direction-selective adaptation. Adaptation in MT+ was correlated with adaptation in V1 but not in V4. With possible downstream inheritance of adaptation, it is most likely that adaptation predominantly occurred in V1. The results extend our previous findings of activation in MT+ (I. Kuriki, H. Ashida, I. Murakami, and A. Kitaoka, 2008), revealing the activity of the cortical network for motion processing from V1 towards MT+. This provides evidence for the role of front-end motion detectors, which has been assumed in proposed models of the illusion. |
Ricky K. C. Au; Fuminori Ono; Katsumi Watanabe Time dilation induced by object motion is based on spatiotopic but not retinotopic positions Journal Article In: Frontiers in Psychology, vol. 3, pp. 58, 2012. @article{Au2012, Time perception of visual events depends on the visual attributes of the scene. Previous studies reported that motion of an object can induce an illusion of lengthened time. In the present study, we asked whether such a time dilation effect depends on the actual physical motion of the object (spatiotopic coordinates), or on its relative motion with respect to the retina (retinotopic coordinates). Observers were presented with a moving stimulus and a static reference stimulus in separate intervals, and judged which interval they perceived as having a longer duration, under conditions with eye fixation (Experiment 1) and with eye movement at the same velocity as the moving stimulus (Experiment 2). The data indicated that the perceived duration was longer under object motion, and depended on the actual movement of the object rather than relative retinal motion. These results support the notion that the brain possesses a spatiotopic representation of the real-world positions of objects, with which the perception of time is associated. |
A. J. Austin; Theodora Duka Mechanisms of attention to conditioned stimuli predictive of a cigarette outcome Journal Article In: Behavioural Brain Research, vol. 232, no. 1, pp. 183–189, 2012. @article{Austin2012, Attention to stimuli associated with a rewarding outcome may be mediated by the incentive motivational properties that the stimulus acquires during conditioning. Other theories of attention state that the prediction error (the discrepancy between the expected and the actual outcome) during conditioning guides attention; once the outcome is fully predicted, attention should be abolished for the conditioned stimulus. The current study examined which of these mechanisms is dominant in conditioning when the outcome is highly rewarding. Allocation of attention to stimuli associated with cigarettes (the rewarding outcome) was tested in 16 smokers, who underwent a classical conditioning paradigm, where abstract visual stimuli were paired with a tobacco outcome. Stimuli were associated with 100% (stimulus A), 50% (stimulus B), or 0% (stimulus C) probability of receiving tobacco. Attention was measured using an eye-tracker device, and the appetitive value of the stimuli was measured with subjective pleasantness ratings during the conditioning process. Dwell time bias (duration of eye gaze) was greatest overall for the A stimulus, and increased over conditioning. Attention to stimulus A was dependent on the ratings of pleasantness that the stimulus evoked, and on the desire to smoke. These findings appear to support the theory that attention for conditioned stimuli is dominated by the incentive motivational qualities of the outcome they predict, and implicate a role for attention in the maintenance of addictive behaviours like smoking. |
Shahin Nasr; Roger B. H. Tootell A cardinal orientation bias in scene-selective visual cortex Journal Article In: Journal of Neuroscience, vol. 32, no. 43, pp. 14921–14926, 2012. @article{Nasr2012, It has long been known that human vision is more sensitive to contours at cardinal (horizontal and vertical) orientations, compared with oblique orientations; this is the "oblique effect." However, the real-world relevance of the oblique effect is not well understood. Experiments here suggest that this effect is linked to scene perception, via a common bias in the image statistics of scenes. This statistical bias for cardinal orientations is found in many "carpentered environments" such as buildings and indoor scenes, and some natural scenes. In Experiment 1, we confirmed the presence of a perceptual oblique effect in a specific set of scene stimuli. Using those scenes, we found that a well known "scene-selective" visual cortical area (the parahippocampal place area; PPA) showed distinctively higher functional magnetic resonance imaging (fMRI) activity to cardinal versus oblique orientations. This fMRI-based oblique effect was not observed in other cortical areas (including scene-selective areas transverse occipital sulcus and retrosplenial cortex), although all three scene-selective areas showed the expected inversion effect to scenes. Experiments 2 and 3 tested for an analogous selectivity for cardinal orientations using computer-generated arrays of simple squares and line segments, respectively. The results confirmed the preference for cardinal orientations in PPA, thus demonstrating that the oblique effect can also be produced in PPA by simple geometrical images, with statistics similar to those in scenes. Thus, PPA shows distinctive fMRI selectivity for cardinal orientations across a broad range of stimuli, which may reflect a perceptual oblique effect. |
Yaqing Niu; Rebecca M. Todd; Matthew Kyan; Adam K. Anderson Visual and emotional salience influence eye movements Journal Article In: ACM Transactions on Applied Perception, vol. 9, no. 3, pp. 1–18, 2012. @article{Niu2012, In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: How well does the classic visual salience model account for gaze locations? Can emotional salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Here we compared Itti and Koch's [2000] and the Spectral Residual (SR) visual salience models and explored the impact of visual salience and emotional salience on eye movement behavior, to understand the competition between visual salience and emotional salience and how they affect gaze allocation in complex scene viewing. Our results show the insufficiency of visual salience models in predicting fixations. Emotional salience can override visual salience and can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control. |
Laura R. Novick; Andrew T. Stull; Kefyn M. Catley Reading phylogenetic trees: The effects of tree orientation and text processing on comprehension Journal Article In: BioScience, vol. 62, no. 8, pp. 757–764, 2012. @article{Novick2012, Although differently formatted cladograms (hierarchical diagrams depicting evolutionary relationships among taxa) depict the same information, they may not be equally easy to comprehend. Undergraduate biology students attempted to translate cladograms from the diagonal to the rectangular format. The "backbone" line of each diagonal cladogram was slanted either up or down to the right. Eye movement analyses indicated that the students had a general bias to scan from left to right. Their scanning direction also depended on the orientation of the "backbone" line, resulting in upward or downward scanning, following the directional slant of the line. Because scanning down facilitates correct interpretation of the nested relationships, translation accuracy was higher for the down than for the up cladograms. Unfortunately, most diagonal cladograms in textbooks are in the upward orientation. This probably impairs students' success at tree thinking (i.e., interpreting and reasoning about evolutionary relationships depicted in cladograms), an important twenty-first century skill. |
Lauri Nummenmaa; Jari K. Hietanen; Pekka Santtila; Jukka Hyönä Gender and visibility of sexual cues influence eye movements while viewing faces and bodies Journal Article In: Archives of Sexual Behavior, vol. 41, no. 6, pp. 1439–1451, 2012. @article{Nummenmaa2012, Faces and bodies convey important information for the identification of potential sexual partners, yet clothing typically covers many of the bodily cues relevant for mating and reproduction. In this eye tracking study, we assessed how men and women viewed nude and clothed, same and opposite gender human figures. We found that participants inspected the nude bodies more thoroughly. First fixations landed almost always on the face, but were subsequently followed by viewing of the chest and pelvic regions. When viewing nude images, fixations were biased away from the face towards the chest and pelvic regions. Fixating these regions was also associated with elevated physiological arousal. Overall, men spent more time looking at female than male stimuli, whereas women looked equally long at male and female stimuli. In comparison to women, men spent relatively more time looking at the chests of nude female stimuli whereas women spent more time looking at the pelvic/genital region of male stimuli. We propose that the augmented and gender-contingent visual scanning of nude bodies reflects selective engagement of the visual attention circuits upon perception of signals relevant to choosing a sexual partner, which supports mating and reproduction. |
Morgan D. Barense; Iris I. A. Groen; Andy C. H. Lee; Lok-Kin Yeung; Sinead M. Brady; Mariella Gregori; Narinder Kapur; Timothy J. Bussey; Lisa M. Saksida; Richard N. A. Henson Intact memory for irrelevant information impairs perception in amnesia Journal Article In: Neuron, vol. 75, no. 1, pp. 157–167, 2012. @article{Barense2012, Memory and perception have long been considered separate cognitive processes, and amnesia resulting from medial temporal lobe (MTL) damage is thought to reflect damage to a dedicated memory system. Recent work has questioned these views, suggesting that amnesia can result from impoverished perceptual representations in the MTL, causing an increased susceptibility to interference. Using a perceptual matching task for which fMRI implicated a specific MTL structure, the perirhinal cortex, we show that amnesics with MTL damage including the perirhinal cortex, but not those with damage limited to the hippocampus, were vulnerable to object-based perceptual interference. Importantly, when we controlled such interference, their performance recovered to normal levels. These findings challenge prevailing conceptions of amnesia, suggesting that effects of damage to specific MTL regions are better understood not in terms of damage to a dedicated declarative memory system, but in terms of impoverished representations of the stimuli those regions maintain. © 2012 Elsevier Inc. |
Markus Bauer; Thomas Akam; Sabine Joseph; Elliot Freeman; Jon Driver Does visual flicker phase at gamma frequency modulate neural signal propagation and stimulus selection? Journal Article In: Journal of Vision, vol. 12, no. 4, pp. 1–10, 2012. @article{Bauer2012, Oscillatory synchronization of neuronal populations has been proposed to play a role in perceptual integration and attentional processing. However, some conflicting evidence has been found with respect to its causal relevance for sensory processing, particularly when using flickering visual stimuli with the aim of driving oscillations. We tested psychophysically whether the relative phase of gamma frequency flicker (60 Hz) between stimuli modulates well-known facilitatory lateral interactions between collinear Gabor patches (Experiment 1) or crowding of a peripheral target by irrelevant distractors (Experiment 2). Experiment 1 assessed the impact of suprathreshold Gabor flankers on detection of a near-threshold central Gabor target ("Lateral interactions paradigm"). The flanking stimuli could flicker either in phase or in anti-phase with each other. The typical facilitation of target detection was found with collinear flankers, but this was unaffected by flicker phase. Experiment 2 employed a "crowding" paradigm, where orientation discrimination of a peripheral target Gabor patch is disrupted when surrounded by irrelevant distractors. We found the usual crowding effect, which declined with spatial separation, but this was unaffected by relative flicker phase between target and distractors at all separations. These results imply that externally driven manipulations of gamma frequency phase cannot modulate perceptual integration in vision. |
Oliver Baumann; Jason B. Mattingley Functional topography of primary emotion processing in the human cerebellum Journal Article In: NeuroImage, vol. 61, no. 4, pp. 805–811, 2012. @article{Baumann2012, The cerebellum has an important role in the control and coordination of movement. It is now clear, however, that the cerebellum is also involved in neural processes underlying a wide variety of perceptual and cognitive functions, including the regulation of emotional responses. Contemporary neurobiological models of emotion assert that a small set of discrete emotions are mediated through distinct cortical and subcortical areas. Given the connectional specificity of neural pathways that link the cerebellum with these areas, we hypothesized that distinct sub-regions of the cerebellum might subserve the processing of different primary emotions. We used functional magnetic resonance imaging (fMRI) to identify neural activity patterns within the cerebellum in 30 healthy human volunteers as they categorized images that elicited each of the five primary emotions: happiness, anger, disgust, fear and sadness. In support of our hypothesis, all five emotions evoked spatially distinct patterns of activity in the posterior lobe of the cerebellum. We also detected overlaps between cerebellar activations for particular emotion categories, implying the existence of shared neural networks. By providing a detailed map of the functional topography of emotion processing in the cerebellum, our study provides important clues to the diverse effects of cerebellar pathology on human affective function. |
Valerie M. Beck; Andrew Hollingworth; Steven J. Luck Simultaneous control of attention by multiple working memory representations Journal Article In: Psychological Science, vol. 23, no. 8, pp. 887–898, 2012. @article{Beck2012, Working memory representations play a key role in controlling attention by making it possible to shift attention to task-relevant objects. Visual working memory has a capacity of three to four objects, but recent studies suggest that only one representation can guide attention at a given moment. We directly tested this proposal by monitoring eye movements while observers performed a visual search task in which they attempted to limit attention to objects drawn in two colors. When the observers were motivated to attend to one color at a time, they searched many consecutive items of one color (long run lengths) and exhibited a delay prior to switching gaze from one color to the other (switch cost). In contrast, when they were motivated to attend to both colors simultaneously, observers' gaze switched back and forth between the two colors frequently (short run lengths), with no switch cost. Thus, multiple working memory representations can concurrently guide attention. |
Sara A. Beedie; Philip J. Benson; Ina Giegling; Dan Rujescu; David M. St. Clair Smooth pursuit and visual scanpaths: Independence of two candidate oculomotor risk markers for schizophrenia Journal Article In: The World Journal of Biological Psychiatry, vol. 13, no. 3, pp. 200–210, 2012. @article{Beedie2012, Objectives. Smooth pursuit and visual scanpath deficits are candidate trait markers for schizophrenia. It is not clear whether eye tracking dysfunction (ETD) and atypical scanpath behaviour are the product of the same underlying neurobiological processes. We have examined co-occurrence of ETD and scanpath disturbance in individuals with schizophrenia and healthy volunteers. Methods. Eye movements of individuals with schizophrenia (N = 96) and non-clinical age-matched comparison participants (N = 100) were recorded using non-invasive infrared oculography during smooth pursuit in both predictable (horizontal sinusoid) and less predictable (Lissajous sinusoid) conditions and a free viewing scanpath task. Results. Individuals with schizophrenia demonstrated scanning deficits in both tasks. There was no association between performance measures of smooth pursuit and scene scanpaths in patient or control groups. Odds ratios comparing the likelihood of scanpath dysfunction when ETD was present, and the likelihood of finding scanpath dysfunction when ETD was absent were not significant in patients or controls in either pursuit variant, suggesting that ETD and scanpath dysfunction are independent anomalies in schizophrenia. Conclusion. ETD and scanpath disturbance appear to reflect independent oculomotor or neurocognitive deficits in schizophrenia. Each task may confer unique information about the pathophysiology of psychosis. © 2012 Informa Healthcare. |
Artem V. Belopolsky; Jan Theeuwes Updating the premotor theory: The allocation of attention is not always accompanied by saccade preparation Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 902–914, 2012. @article{Belopolsky2012, There is an ongoing controversy regarding the relationship between covert attention and saccadic eye movements. While there is considerable evidence that the preparation of a saccade is obligatorily preceded by a shift of covert attention, the reverse is not clear: Is allocation of attention always accompanied by saccade preparation? Recently, a shifting and maintenance account was proposed suggesting that shifting and maintenance components of covert attention differ in their relation to the oculomotor system. Specifically, it was argued that a shift of covert attention is always accompanied by activation of the oculomotor program, while maintaining covert attention at a location can be accompanied either by activation or suppression of the oculomotor program, depending on the probability of executing an eye movement to the attended location. In the present study we tested whether there is such an obligatory coupling between shifting of attention and saccade preparation and how quickly saccade preparation gets suppressed. The results showed that attention shifting was always accompanied by saccade preparation whenever covert attention had to be shifted during visual search, as well as in response to exogenous or endogenous cues. However, for the endogenous cues the saccade program to the attended location was suppressed very soon after the attention shift was completed. The current findings support the shifting and maintenance account and indicate that the premotor theory needs to be updated to include a shifting and maintenance component for the cases in which covert shifts of attention are made without the intention to execute a saccade. |
Valerie Benson; Magdalena Ietswaart; David Milner Eye movements and verbal report in a single case of visual neglect Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e43743, 2012. @article{Benson2012b, In this single case study, visuospatial neglect patient P1 demonstrated a dissociation between an intact ability to make appropriate reflexive eye movements to targets in the neglected field with latencies of <400 ms, while failing to report targets presented at such durations in a separate verbal detection task. In contrast, there was a failure to evoke the usually robust Remote Distractor Effect in P1, even though distractors in the neglected field were presented at above threshold durations. Together those data indicate that the tight coupling that is normally shown between attention and eye movements appears to be disrupted for low-level orienting in P1. A comparable disruption was also found for high-level cognitive processing tasks, namely reading and scene scanning. The findings are discussed in relation to sampling, attention and awareness in neglect. |
Hauke S. Meyerhoff; Korbinian Moeller; Kolja Debus; Hans-Christoph Nuerk Multi-digit number processing beyond the two-digit number range: A combination of sequential and parallel processes Journal Article In: Acta Psychologica, vol. 140, no. 1, pp. 81–90, 2012. @article{Meyerhoff2012, Investigations of multi-digit number processing typically focus on two-digit numbers. Here, we aim to investigate the generality of results from two-digit numbers for four- and six-digit numbers. Previous studies on two-digit numbers mostly suggested a parallel processing of tens and units. In contrast, the few studies examining the processing of larger numbers suggest sequential processing of the individual constituting digits. In this study, we combined the methodological approaches of studies implying either parallel or sequential processing. Participants completed a number magnitude comparison task on two-, four-, and six-digit numbers including unit-decade compatible and incompatible differing digit pairs (e.g., 32_47: 3 < 4 and 2 < 7, vs. 37_52: 3 < 5 but 7 > 2, respectively) at all possible digit positions. Response latencies and fixation behavior indicated that sequential and parallel decomposition is not exclusive in multi-digit number processing. Instead, our results clearly suggested that sequential and parallel processing strategies seem to be combined when processing multi-digit numbers beyond the two-digit number range. To account for the results, we propose a chunking hypothesis claiming that multi-digit numbers are separated into chunks of shorter digit strings. While the different chunks are processed sequentially, digits within these chunks are processed in parallel. |
Sébastien Miellet; Lingnan He; Xinyue Zhou; Junpeng Lao; Roberto Caldara When East meets West: Gaze-contingent Blindspots abolish cultural diversity in eye movements for faces Journal Article In: Journal of Eye Movement Research, vol. 5, no. 2, pp. 1–12, 2012. @article{Miellet2012, Culture impacts on how people sample visual information for face processing. Westerners deploy fixations towards the eyes and the mouth to achieve face recognition. In contrast, Easterners reach equal performance by deploying more central fixations, suggesting an effective extrafoveal information use. However, this hypothesis has not yet been directly investigated, i.e. by providing only extrafoveal information to both groups of observers. We used a parametric gaze-contingent technique dynamically masking central vision - the Blindspot - with Western and Eastern observers during face recognition. Westerners shifted progressively towards the typical Eastern central fixation pattern with larger Blindspots, whereas Easterners were insensitive to the Blindspots. These observations clearly show that Easterners preferentially sample information extrafoveally for faces. Conversely, the Western data also show that culturally-dependent visuo-motor strategies can flexibly adjust to constrained visual situations. |
Lisa M. Soederberg Miller; Diana L. Cassady Making healthy food choices using nutrition facts panels. The roles of knowledge, motivation, dietary modification goals, and age Journal Article In: Appetite, vol. 59, no. 1, pp. 129–139, 2012. @article{Miller2012, Nutrition facts panels (NFPs) contain a rich assortment of nutrition information and are available on most food packages. The importance of this information is potentially even greater among older adults due to their increased risk for diet-related diseases, as well as those with goals for dietary modifications that may impact food choice. Despite past work suggesting that knowledge and motivation impact attitudes surrounding and self-reported use of NFPs, we know little about how (i.e., strategies used) and how well (i.e., level of accuracy) younger and older individuals process NFP information when evaluating healthful qualities of foods. We manipulated the content of NFPs and, using eye tracking methodology, examined strategies associated with deciding which of two NFPs, presented side-by-side, was healthier. We examined associations among strategy use and accuracy as well as age, dietary modification status, knowledge, and motivation. Results showed that, across age groups, those with dietary modification goals made relatively more comparisons between NFPs with increasing knowledge and motivation; but that strategy effectiveness (relationship to accuracy) depended on age and motivation. Results also showed that knowledge and motivation may protect against declines in accuracy in later life and that, across age and dietary modification status, knowledge mediates the relationship between motivation and decision accuracy. |
Milica Milosavljevic; Vidhya Navalpakkam; Christof Koch; Antonio Rangel Relative visual saliency differences induce sizable bias in consumer choice Journal Article In: Journal of Consumer Psychology, vol. 22, no. 1, pp. 67–74, 2012. @article{Milosavljevic2012, Consumers often need to make very rapid choices among multiple brands (e.g., at a supermarket shelf) that differ both in their reward value (e.g., taste) and in their visual properties (e.g., color and brightness of the packaging). Since the visual properties of stimuli are known to influence visual attention, and attention is known to influence choices, this gives rise to a potential visual saliency bias in choices. We utilize experimental design from visual neuroscience in three real food choice experiments to measure the size of the visual saliency bias and how it changes with decision speed and cognitive load. Our results show that at rapid decision speeds visual saliency influences choices more than preferences do, that the bias increases with cognitive load, and that it is particularly strong when individuals do not have strong preferences among the options. |
Daniel Mirman; Kristen M. Graziano Individual differences in the strength of taxonomic versus thematic relations Journal Article In: Journal of Experimental Psychology: General, vol. 141, no. 4, pp. 601–609, 2012. @article{Mirman2012a, Knowledge about word and object meanings can be organized taxonomically (fruits, mammals, etc.) on the basis of shared features or thematically (eating breakfast, taking a dog for a walk, etc.) on the basis of participation in events or scenarios. An eye-tracking study showed that both kinds of knowledge are activated during comprehension of a single spoken word, even when the listener is not required to perform any active task. The results further revealed that an individual's relative activation of taxonomic relations compared to thematic relations predicts that individual's tendency to favor taxonomic over thematic relations when asked to choose between them in a similarity judgment task. These results indicate that individuals differ in the relative strengths of their taxonomic and thematic semantic knowledge and suggest that meaning information is organized in 2 parallel, complementary semantic systems. |
Amanda F. Moates; Elena I. Ivleva; Hugh B. O'Neill; Nithin Krishna; C. Munro Cullum; Gunvant K. Thaker; Carol A. Tamminga Predictive pursuit association with deficits in working memory in psychosis Journal Article In: Biological Psychiatry, vol. 72, no. 9, pp. 752–757, 2012. @article{Moates2012, Background: Deficits in smooth pursuit eye movements are an established phenotype for schizophrenia (SZ) and are being investigated as a potential liability marker for bipolar disorder. Although the molecular determinants of this deficit are still unclear, research has verified deficits in predictive pursuit mechanisms in SZ. Because predictive pursuit might depend on the working memory system, we have hypothesized a relationship between the two in healthy control subjects (HC) and SZ and here examine whether it extends to psychotic bipolar disorder (BDP). Methods: Volunteers with SZ (n = 38), BDP (n = 31), and HC (n = 32) performed a novel eye movement task to assess predictive pursuit as well as a standard visuospatial measure of working memory. Results: Individuals with SZ and BDP both showed reduced predictive pursuit gain compared with HC (p <.05). Moreover, each patient group showed worse performance in visuospatial working memory compared with control subjects (p <.05). A strong correlation (r =.53 |
Kristien Ooms; Gennady Andrienko; Natalia Andrienko; Philippe De Maeyer; Veerle Fack Analysing the spatial dimension of eye movement data using a visual analytic approach Journal Article In: Expert Systems with Applications, vol. 39, no. 1, pp. 1324–1332, 2012. @article{Ooms2012, Conventional analyses on eye movement data only take into account eye movement metrics, such as the number or the duration of fixations and length of the scanpaths, on which statistical analysis is performed for detecting significant differences. However, the spatial dimension in the eye movements is neglected, which is an essential element when investigating the design of maps. The study described in this paper uses a visual analytics software package, the Visual Analytics Toolkit, to analyse the eye movement data. Selection, simplification and aggregation functions are applied to filter out meaningful subsets of the data to be able to recognise structures in the movement data. Visualising and analysing these patterns provides essential insights into the user's search strategies while working on a(n interactive) map. |
José P. Ossandón; Selim Onat; Dario Cazzoli; Thomas Nyffeler; René M. Müri; Peter König Unmasking the contribution of low-level features to the guidance of attention Journal Article In: Neuropsychologia, vol. 50, no. 14, pp. 3478–3487, 2012. @article{Ossandon2012, The role of low-level stimulus-driven control in the guidance of overt visual attention has been difficult to establish because low- and high-level visual content are spatially correlated within natural visual stimuli. Here we show that impairment of parietal cortical areas, either permanently by a lesion or reversibly by repetitive transcranial magnetic stimulation (rTMS), leads to fixation of locations with higher values of low-level features as compared to control subjects or in a no-rTMS condition. Moreover, this unmasking of stimulus-driven control crucially depends on the intrahemispheric balance between top-down and bottom-up cortical areas. This result suggests that although in normal behavior high-level features might exert a strong influence, low-level features do contribute to guide visual selection during the exploration of complex natural stimuli. |
Mathias Abegg; Nishant Sharma; Jason J. S. Barton Antisaccades generate two types of saccadic inhibition Journal Article In: Biological Psychology, vol. 89, no. 1, pp. 191–194, 2012. @article{Abegg2012, To make an antisaccade away from a stimulus, one must also suppress the more reflexive prosaccade to the stimulus. Whether this inhibition is diffuse or specific for saccade direction is not known. We used a paradigm examining inter-trial carry-over effects. Twelve subjects performed sequences of four identical antisaccades followed by sequences of four prosaccades randomly directed at the location of the antisaccade stimulus, the location of the antisaccade goal, or neutral locations. We found two types of persistent antisaccade-related inhibition. First, prosaccades in any direction were delayed only in the first trial after the antisaccades. Second, prosaccades to the location of the antisaccade stimulus were delayed more than all other prosaccades, and this persisted from the first to the fourth subsequent trial. These findings are consistent with both a transient global inhibition and a more sustained focal inhibition specific for the location of the antisaccade stimulus. |
Jos J. Adam; Simona Buetti; Dirk Kerzel Coordinated flexibility: How initial gaze position modulates eye-hand coordination and reaching Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 4, pp. 891–901, 2012. @article{Adam2012, Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags between eye and hand onsets were small and initiation times were substantially correlated, suggesting simultaneous control and tight eye-hand coupling. With eyes and hand departing from different start locations (gaze aligned with the center of the range of possible target positions), time lags between eye and hand onsets were large and initiation times were largely uncorrelated, suggesting independent control and decoupling of eye and hand movements. Furthermore, initial gaze position strongly mediated manual reaching performance indexed by increments in movement time as a function of target distance. Experiment 2 confirmed the impact of target foveation in modulating the effect of target distance on movement time. Our findings reveal the operation of an overarching, flexible neural control system that tunes the operation and cooperation of saccadic and manual control systems depending on where the eyes look at target onset. |
Robert Adam; Paul M. Bays; Masud Husain Rapid decision-making under risk Journal Article In: Cognitive Neuroscience, vol. 3, no. 1, pp. 52–61, 2012. @article{Adam2012a, Impulsivity is often characterized by rapid decisions under risk, but most current tests of decision-making do not impose time pressures on participants' choices. Here we introduce a new Traffic Lights test which requires people to choose whether to programme a risky, early eye movement before a traffic light turns green (earning them high rewards or a penalty) or wait for the green light before responding to obtain a small reward instead. Young participants demonstrated bimodal responses: an early, high-risk and a later, low-risk set of choices. By contrast, elderly people invariably waited for the green light and showed little risk-taking. Performance could be modelled as a race between two rise-to-threshold decision processes, one triggered by the green light and the other initiated before it. The test provides a useful measure of rapid decision-making under risk, with the potential to reveal how this process alters with aging or in patient groups. |
Nicholas D. Smith; David P. Crabb; Fiona C. Glen; Robyn Burton; David F. Garway-Heath Eye movements in patients with glaucoma when viewing images of everyday scenes Journal Article In: Seeing and Perceiving, vol. 25, no. 5, pp. 471–492, 2012. @article{Smith2012a, This study tests the hypothesis that patients with bilateral glaucoma exhibit different eye movements compared to normally-sighted people when viewing computer displayed photographs of everyday scenes. Thirty glaucomatous patients and 30 age-related controls with normal vision viewed images on a computer monitor whilst eye movements were simultaneously recorded using an eye tracking system. The patients demonstrated a significant reduction in the average number of saccades compared to controls (P = 0.02; mean reduction of 7% (95% confidence interval (CI): 3–11%)). There was no difference in average saccade amplitude between groups but there was between-person variability in patients. The average elliptical region scanned by the patients, as measured by bivariate contour ellipse area (BCEA) analysis, was more restricted compared to controls (P = 0.004; mean reduction of 23% (95% CI: 11–35%)). A novel analysis mapping areas of interest in the images indicated a weak association between severity of functional deficit and a tendency to not view regions typically viewed by the controls. In conclusion, some eye movements in some patients with bilateral glaucomatous defects differ from those of normally-sighted people of a similar age when viewing images of everyday scenes, providing a potential new window into the functional consequences of the disease. |
Tim J. Smith The attentional theory of cinematic continuity Journal Article In: Projections, vol. 6, no. 1, pp. 1–27, 2012. @article{Smith2012b, The intention of most film editing is to create the impression of continuity by editing together discontinuous viewpoints. The continuity editing rules are well established yet there exists an incomplete understanding of their cognitive foundations. This article presents the Attentional Theory of Cinematic Continuity (AToCC), which identifies the critical role visual attention plays in the perception of continuity across cuts and demonstrates how perceptual expectations can be matched across cuts without the need for a coherent representation of the depicted space. The theory explains several key elements of the continuity editing style including match-action, matched-exit/entrances, shot/reverse-shot, the 180° rule, and point-of-view editing. AToCC formalizes insights about viewer cognition that have been latent in the filmmaking community for nearly a century and demonstrates how much vision science in general can learn from film. |
Tim J. Smith; Peter Lamont; John M. Henderson The penny drops: Change blindness at fixation Journal Article In: Perception, vol. 41, no. 4, pp. 489–492, 2012. @article{Smith2012c, Our perception of the visual world is fallible. Unattended objects may change without us noticing as long as the change does not capture attention (change blindness). However, it is often assumed that changes to a fixated object will be noticed if it is attended. In this experiment we demonstrate that participants fail to detect a change in identity of a coin during a magic trick even though eyetracking indicates that the coin is tracked by the eyes throughout the trick. The change is subsequently detected when participants are instructed to look for it. These results suggest that during naturalistic viewing, attention can be focused on an object at fixation without including all of its features. |
Grayden J. F. Solman; J. Allan Cheyne; Daniel Smilek Found and missed: Failing to recognize a search target despite moving it Journal Article In: Cognition, vol. 123, no. 1, pp. 100–118, 2012. @article{Solman2012a, We present results from five search experiments using a novel 'unpacking' paradigm in which participants use a mouse to sort through random heaps of distractors to locate the target. We report that during this task participants often fail to recognize the target despite moving it, and despite having looked at the item. Additionally, the missed target item appears to have been processed as evidenced by post-error slowing of individual moves within a trial. The rate of this 'unpacking error' was minimally affected by set size and dual task manipulations, but was strongly influenced by perceptual difficulty and perceptual load. We suggest that the error occurs because of a dissociation between perception for action and perception for identification, providing further evidence that these processes may operate relatively independently even in naturalistic contexts, and even in settings like search where they should be expected to act in close coordination. |
Grayden J. F. Solman; Daniel Smilek Memory benefits during visual search depend on difficulty Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 6, pp. 689–702, 2012. @article{Solman2012, In three experiments we explored whether memory for previous locations of search items influences search efficiency more as the difficulty of exhaustive search increases. Difficulty was manipulated by varying item eccentricity and item similarity (discriminability). Participants searched through items placed at three levels of eccentricity. The search displays were either identical on every trial (repeated condition) or the items were randomly reorganised from trial to trial (random condition), and search items were either relatively easy or difficult to discriminate from each other. Search was both faster and more efficient (i.e., search slopes were shallower) in the repeated condition than in the random condition. More importantly, this advantage for repeated displays was greater (1) for items that were more difficult to discriminate and (2) for eccentric targets when items were easily discriminable. Thus, increasing target eccentricity and reducing item discriminability both increase the influence of memory during search. |
David Souto; Dirk Kerzel Like a rolling stone: Naturalistic kinematics influence tracking eye movements Journal Article In: Journal of Vision, vol. 12, no. 9, pp. 1–12, 2012. @article{Souto2012, Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition in which there was no rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics. |
Miriam Spering; Marisa Carrasco Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness Journal Article In: Journal of Neuroscience, vol. 32, no. 22, pp. 7594–7601, 2012. @article{Spering2012, Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids (stimuli composed of two orthogonally drifting gratings, presented separately to each eye) in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. |
M. L. Reinholdt-Dunne; Karin Mogg; V. Benson; B. P. Bradley; M. G. Hardin; Simon P. Liversedge; Daniel S. Pine; M. Ernst Anxiety and selective attention to angry faces: An antisaccade study Journal Article In: Journal of Cognitive Psychology, vol. 24, no. 1, pp. 54–65, 2012. @article{ReinholdtDunne2012, Cognitive models of anxiety propose that anxiety is associated with an attentional bias for threat, which increases vulnerability to emotional distress and is difficult to control. The study aim was to investigate relationships between the effects of threatening information, anxiety, and attention control on eye movements. High and low trait anxious individuals performed antisaccade and prosaccade tasks with angry, fearful, happy, and neutral faces. Results indicated that high-anxious participants showed a greater antisaccade cost for angry than neutral faces (i.e., relatively slower to look away from angry faces), compared with low-anxious individuals. This bias was not found for fearful or happy faces. The bias for angry faces was not related to individual differences in attention control assessed on self-report and behavioural measures. Findings support the view that anxiety is associated with difficulty in using cognitive control resources to inhibit attentional orienting to angry faces, and that attention control is multifaceted. |
Helen J. Richards; Valerie Benson; Julie A. Hadwin The attentional processes underlying impaired inhibition of threat in anxiety: The remote distractor effect Journal Article In: Cognition and Emotion, vol. 26, no. 5, pp. 934–942, 2012. @article{Richards2012, The current study explored the proposition that anxiety is associated with impaired inhibition of threat. Using a modified version of the remote distractor paradigm, we considered whether this impairment is related to attentional capture by threat, difficulties disengaging from threat presented within foveal vision, or difficulties orienting to task-relevant stimuli when threat is present in central, parafoveal and peripheral locations in the visual field. Participants were asked to direct their eyes towards and identify a target in the presence and absence of a distractor (an angry, happy or neutral face). Trait anxiety was associated with a delay in initiating eye movements to the target in the presence of central, parafoveal and peripheral threatening distractors. These findings suggest that elevated anxiety is linked to difficulties inhibiting task-irrelevant threat presented across a broad region of the visual field. |
Gerulf Rieger; Ritch C. Savin-Williams The eyes have it: Sex and sexual orientation differences in pupil dilation patterns Journal Article In: PLoS ONE, vol. 7, no. 8, pp. e40256, 2012. @article{Rieger2012, Recent research suggests profound sex and sexual orientation differences in sexual response. These results, however, are based on measures of genital arousal, which have potential limitations such as volunteer bias and differential measures for the sexes. The present study introduces a measure less affected by these limitations. We assessed the pupil dilation of 325 men and women of various sexual orientations to male and female erotic stimuli. Results supported hypotheses. In general, self-reported sexual orientation corresponded with pupil dilation to men and women. Among men, substantial dilation to both sexes was most common in bisexual-identified men. In contrast, among women, substantial dilation to both sexes was most common in heterosexual-identified women. Possible reasons for these differences are discussed. Because the measure of pupil dilation is less invasive than previous measures of sexual response, it allows for studying diverse age and cultural populations, usually not included in sexuality research. |
Hector Rieiro; Susana Martinez-Conde; Andrew P. Danielson; Jose L. Pardo-Vazquez; Nishit Srivastava; Stephen L. Macknik Optimizing the temporal dynamics of light to human perception Journal Article In: Proceedings of the National Academy of Sciences, vol. 109, no. 48, pp. 19828–19833, 2012. @article{Rieiro2012, No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to a contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings. |
Evan F. Risko; Nicola C. Anderson; Sophie Lanthier; Alan Kingstone Curious eyes: Individual differences in personality predict eye movement behavior in scene-viewing Journal Article In: Cognition, vol. 122, no. 1, pp. 86–90, 2012. @article{Risko2012, Visual exploration is driven by two main factors - the stimuli in our environment, and our own individual interests and intentions. Research investigating these two aspects of attentional guidance has focused almost exclusively on factors common across individuals. The present study took a different tack, and examined the role played by individual differences in personality. Our findings reveal that trait curiosity is a robust and reliable predictor of an individual's eye movement behavior in scene-viewing. These findings demonstrate that who a person is relates to how they move their eyes. |
Solveiga Stonkute; Jochen Braun; Alexander Pastukhov The role of attention in ambiguous reversals of structure-from-motion Journal Article In: PLoS ONE, vol. 7, no. 5, pp. e37734, 2012. @article{Stonkute2012, Multiple dots moving independently back and forth on a flat screen induce a compelling illusion of a sphere rotating in depth (structure-from-motion). If all dots simultaneously reverse their direction of motion, two perceptual outcomes are possible: either the illusory rotation reverses as well (and the illusory depth of each dot is maintained), or the illusory rotation is maintained (but the illusory depth of each dot reverses). We investigated the role of attention in these ambiguous reversals. Greater availability of attention, as manipulated with a concurrent task or inferred from eye movement statistics, shifted the balance in favor of reversing illusory rotation (rather than depth). On the other hand, volitional control over illusory reversals was limited and did not depend on tracking individual dots during the direction reversal. Finally, display properties strongly influenced ambiguous reversals. Any asymmetries between 'front' and 'back' surfaces, created either on purpose by coloring or accidentally by random dot placement, also shifted the balance in favor of reversing illusory rotation (rather than depth). We conclude that the outcome of ambiguous reversals depends on attention, specifically on attention to the illusory sphere and its surface irregularities, but not on attentive tracking of individual surface dots. |
Michael J. Stroud; Tamaryn Menneer; Kyle R. Cave; Nick Donnelly Using the dual-target cost to explore the nature of search target representations Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 38, no. 1, pp. 113–122, 2012. @article{Stroud2012, Eye movements were monitored to examine search efficiency and infer how color is mentally represented to guide search for multiple targets. Observers located a single color target very efficiently by fixating colors similar to the target. However, simultaneous search for 2 colors produced a dual-target cost. In addition, as the similarity between the 2 target colors decreased, search efficiency suffered, resulting in more fixations on colors dissimilar to both target colors, which we describe as a "split-target cost." The patterns of fixations provide evidence as to the type of mental representations guiding search. When the 2 targets are dissimilar, they are apparently encoded as separate and discrete representations. The fixation patterns for more similar targets can be explained with either 2 discrete target representations or a single, unitary range containing the target colors as well as the colors between them in color space. |
Yin Su; Li-Lin Rao; Xingshan Li; Yong Wang; Shu Li From quality to quantity: The role of common features in consumer preference Journal Article In: Journal of Economic Psychology, vol. 33, no. 6, pp. 1043–1058, 2012. @article{Su2012, Although previous studies of consumer choice have found that common features of alternatives are cancelled and that choices are based only on unique features, a recent study has suggested that common features are canceled only when they are irrelevant in regard to all unique features. The present study hypothesized that the role of a common feature in consumer choice depends on its quantity as well as its quality. Experiments 1 and 2 tested this hypothesis and the equate-to-differentiate account by varying the quality and the quantity of common features. Experiment 3 examined the cognitive process that was proposed to serve as the mechanism for the common feature effect using eye-tracking methodology. This study provided further insight into conditions when the cancellation-and-focus model applies. Study results revealed an attribute-based tradeoff process underlying multiple-attribute decision making, and suggested an avenue through which marketers might influence consumer choices. |
Martin Szinte; Patrick Cavanagh Apparent motion from outside the visual field, retinotopic cortices may register extra-retinal positions Journal Article In: PLoS ONE, vol. 7, no. 10, pp. e47386, 2012. @article{Szinte2012a, Observers made a saccade between two fixation markers while a probe was flashed sequentially at two locations on a side screen. The first probe was presented in the far periphery just within the observer's visual field. This target was extinguished and the observers made a large saccade away from the probe, which would have left it far outside the visual field if it had still been present. The second probe was then presented, displaced from the first in the same direction as the eye movement and by about the same distance as the saccade step. Because both eyes and probes shifted by similar amounts, there was little or no shift between the first and second probe positions on the retina. Nevertheless, subjects reported seeing motion corresponding to the spatial displacement not the retinal displacement. When the second probe was presented, the effective location of the first probe lay outside the visual field demonstrating that apparent motion can be seen from a location outside the visual field to a second location inside the visual field. Recent physiological results suggest that target locations are "remapped" on retinotopic representations to correct for the effects of eye movements. Our results suggest that the representations on which this remapping occurs include locations that fall beyond the limits of the retina. |
Martin Szinte; Mark Wexler; Patrick Cavanagh Temporal dynamics of remapping captured by peri-saccadic continuous motion Journal Article In: Journal of Vision, vol. 12, no. 7, pp. 1–18, 2012. @article{Szinte2012, Different attention and saccade control areas contribute to space constancy by remapping target activity onto their expected post-saccadic locations. To visualize this dynamic remapping, we used a technique developed by Honda (2006) where a probe moved vertically while participants made a saccade across the motion path. Observers do not report any large excursions of the trace at the time of the saccade that would correspond to the classical peri-saccadic mislocalization effect. Instead, they reported that the motion trace appeared to be broken into two separate segments with a shift of approximately one-fifth of the saccade amplitude representing an overcompensation of the expected retinal displacement caused by the saccade. To measure the timing of this break in the trace, we introduced a second, physical shift that was the same size but opposite in direction to the saccade-induced shift. The trace appeared continuous most frequently when the physical shift was introduced at the midpoint of the saccade, suggesting that the compensation is in place when the saccade lands. Moreover, this simple linear shift made the combined traces appear continuous and linear, with no curvature. In contrast, Honda (2006) had reported that the pre- and post-saccadic portion of the trace appeared aligned and that there was often a small, visible excursion of the trace at the time of the saccade. To compare our results more directly, we increased the contrast of our moving probe in a third experiment. Now some observers reported seeing a deviation in the motion path but the misalignment remained present. We conclude that the large deviations at the time of saccade are generally masked for a continuously moving target but that there is nevertheless a residual misalignment between pre- and post-saccadic coordinates of approximately 20% of the saccade amplitude that normally goes unnoticed. |
Hiromasa Takemura; Hiroshi Ashida; Kaoru Amano; Akiyoshi Kitaoka; Ikuya Murakami Neural correlates of induced motion perception in the human brain Journal Article In: Journal of Neuroscience, vol. 32, no. 41, pp. 14344–14354, 2012. @article{Takemura2012, A physically stationary stimulus surrounded by a moving stimulus appears to move in the opposite direction. There are similarities between the characteristics of this phenomenon of induced motion and surround suppression of directionally selective neurons in the brain. Here, functional magnetic resonance imaging was used to investigate the link between the subjective perception of induced motion and cortical activity. The visual stimuli consisted of a central drifting sinusoid surrounded by a moving random-dot pattern. The change in cortical activity in response to changes in speed and direction of the central stimulus was measured. The human cortical area hMT+ showed the greatest activation when the central stimulus moved at a fast speed in the direction opposite to that of the surround. More importantly, the activity in this area was the lowest when the central stimulus moved in the same direction as the surround and at a speed such that the central stimulus appeared to be stationary. The results indicate that the activity in hMT+ is related to perceived speed modulated by induced motion rather than to physical speed or a kinetic boundary. Early visual areas (V1, V2, V3, and V3A) showed a similar pattern; however, the relationship to perceived speed was not as clear as that in hMT+. These results suggest that hMT+ may be a neural correlate of induced motion perception and play an important role in contrasting motion signals in relation to their surrounding context and adaptively modulating our motion perception depending on the spatial context. |