EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2017 |
Zhuohao Chen; Jinchen Du; Min Xiang; Yan Zhang; Shuyue Zhang Social exclusion leads to attentional bias to emotional social information: Evidence from eye movement Journal Article In: PLoS ONE, vol. 12, no. 10, pp. e0186313, 2017. @article{Chen2017d, Social exclusion has many effects on individuals, including the increased need to belong and elevated sensitivity to social information. Using a self-report method and an eye-tracking technique, this study explored people's need to belong and attentional bias towards the socio-emotional information (pictures of positive and negative facial expressions compared to those of emotionally-neutral expressions) after experiencing a brief episode of social exclusion. We found that: (1) socially-excluded individuals reported higher negative emotions, lower positive emotions, and stronger need to belong than those who were not socially excluded; (2) compared to a control condition, social exclusion caused a longer response time to probe dots after viewing positive or negative face images; (3) social exclusion resulted in a higher frequency ratio of first attentional fixation on both positive and negative emotional facial pictures (but not on the neutral pictures) than the control condition; (4) in the social exclusion condition, participants showed shorter first fixation latency and longer first fixation duration to positive pictures than neutral ones but this effect was not observed for negative pictures; (5) participants who experienced social exclusion also showed longer gazing duration on the positive pictures than those who did not; although group differences also existed for the negative pictures, the gaze duration bias from both groups showed no difference from chance. This study demonstrated the emotional response to social exclusion and characterised multiple eye-movement indicators of attentional bias after experiencing social exclusion. |
Kyoung Whan Choe; Omid Kardan; Hiroki P. Kotabe; John M. Henderson; Marc G. Berman To search or to like: Mapping fixations to differentiate two forms of incidental scene memory Journal Article In: Journal of Vision, vol. 17, no. 12, pp. 1–22, 2017. @article{Choe2017, We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impair incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks. |
S. Bertleff; Gereon R. Fink; Ralph Weidner Attentional capture: Role of top-down focused spatial attention and the need to search among multiple locations Journal Article In: Visual Cognition, vol. 25, no. 1-3, pp. 326–342, 2017. @article{Bertleff2017, Top-down focused spatial attention can counteract bottom-up attentional capture of an irrelevant but salient distractor outside the attentional focus. The present behavioural study differentiates two alternative concepts accounting for the absence of attentional capture under top-down focused attention. In particular, top-down focused attention may counteract attentional capture by altering salience coding outside the focus of attention. Alternatively, spatially focusing on a pre-defined region of potential target locations may omit the need to search among multiple salience signals, thereby eliminating the tendency of unattended stimuli to compete for attentional selection and, hence, to capture attention. Spatial cues explicitly indicating a variable number of potential target locations preceded the additional singleton paradigm to gradually manipulate the need to search for a target (i.e., to select a target from an array of distractors) and to determine its effects on attentional capture. Attentional capture occurred only when a salient distractor was located at potential target locations and never occurred when located outside the attended spotlight. This finding was independent of the parametrical variations related to the need to search for the target, which did not modulate attentional capture either. Accordingly, our data suggest that the presence or absence of salience-based distraction of unattended stimuli is not per se affected by the need to select a target from multiple salience signals. |
Jutta Billino; Jürgen Hennig; Karl R. Gegenfurtner In: Vision Research, vol. 141, pp. 170–180, 2017. @article{Billino2017, The neural circuits involved in oculomotor control are well described; however, neuromodulation of eye movements is still hardly understood. Memory guided saccades have been extensively studied, and in particular neurophysiological evidence from monkey studies points to a crucial functional role of prefrontal dopamine activity. We exploited individual differences in dopamine regulation due to the well established COMT (catechol-O-methyltransferase) Val158Met polymorphism to explore the link between prefrontal dopamine activity and memory guided saccades in healthy subjects. The COMT genotype is thought to modulate dopamine metabolism in prefrontal cortex, producing differences in dopamine availability. We investigated memory guided saccades in 111 healthy subjects and determined individual genotypes. Accuracy and precision were reduced in subjects with putatively higher prefrontal dopamine levels. In contrast, we found no modulation of saccade parameters by genotype in a visually guided control task. Our results suggest that increased dopamine activity can have a detrimental effect on saccades that rely on spatial memory representations. Although these findings await replication in larger and more diverse samples, they provide persuasive support that specific oculomotor parameters are sensitive to dopaminergic variation in healthy subjects and add to a better understanding of how dopamine modulates saccadic control. |
Caroline Blais; Daniel Fiset; Cynthia Roy; Camille Saumure Régimbald; Frédéric Gosselin Eye fixation patterns for categorizing static and dynamic facial expressions Journal Article In: Emotion, vol. 17, no. 7, pp. 1107–1119, 2017. @article{Blais2017, Facial expressions of emotion are dynamic in nature, but most studies on the visual strategies underlying the recognition of facial emotions have used static stimuli. The present study directly compared the visual strategies underlying the recognition of static and dynamic facial expressions using eye tracking and the Bubbles technique. The results revealed different eye fixation patterns with the 2 kinds of stimuli, with fewer fixations on the eye and mouth area during the recognition of dynamic than static expressions. However, these differences in eye fixations were not accompanied by any systematic differences in the facial information that was actually processed to recognize the expressions. |
Annabelle Blangero; Simon P. Kelly Neural signature of value-based sensorimotor prioritization in humans Journal Article In: Journal of Neuroscience, vol. 37, no. 44, pp. 10725–10737, 2017. @article{Blangero2017, In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus-action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750-800 ms before as peripheral "targets" that specified the stimulus-action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization. |
Anna K. Bobak; Benjamin A. Parris; Nicola J. Gregory; Rachel J. Bennetts; Sarah Bate Eye-movement strategies in developmental prosopagnosia and “super” face recognition Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 2, pp. 201–217, 2017. @article{Bobak2017, Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose, a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition. |
Paul J. Boon; Artem V. Belopolsky Eye abduction reduces but does not eliminate competition in the oculomotor system Journal Article In: Journal of Vision, vol. 17, no. 5, pp. 1–10, 2017. @article{Boon2017, Although it is well established that there is a tight coupling between covert attention and the eye movement system, there is an ongoing controversy whether this relationship is functional. Previous studies demonstrated that disrupting the ability to execute an eye movement interferes with the allocation of covert attention. One technique that prevents the execution of an eye movement involves the abduction of the eye in the orbit while presenting the stimuli outside of the effective oculomotor range (Craighero, Nascimben, & Fadiga, 2004). Although eye abduction is supposed to disrupt activation of the oculomotor program responsible for the shift of covert attention, this crucial assumption has never been tested experimentally. In the present study we used saccadic curvature to examine whether eye abduction eliminates the target-distractor competition in the oculomotor system. We experimentally reduced the ability to execute saccades by abducting the eye by 30° (monocular vision). This way the peripheral part of the temporal hemifield was located outside the oculomotor range. Participants made a vertical eye movement while on some trials a distractor was shown either inside or outside of the oculomotor range. The curvature away from distractors located outside the oculomotor range was reduced, but not completely eliminated. This confirms that eye abduction influences the activation of the oculomotor program, but points to the fact that other forms of motor planning, such as head movements, are also represented in the oculomotor system. The results are in line with the idea that covert attention is an emerging property of movement planning, but is not restricted to saccade planning. |
Robert W. Booth; Bundy Mackintosh; Dinkar Sharma Working memory regulates trait anxiety-related threat processing biases Journal Article In: Emotion, vol. 17, no. 4, pp. 616–627, 2017. @article{Booth2017, High trait anxious individuals tend to show biased processing of threat. Correlational evidence suggests that executive control could be used to regulate such threat-processing. On this basis, we hypothesized that trait anxiety-related cognitive biases regarding threat should be exaggerated when executive control is experimentally impaired by loading working memory. In Study 1, 68 undergraduates read ambiguous vignettes under high and low working memory load; later, their interpretations of these vignettes were assessed via a recognition test. Trait anxiety predicted biased interpretation of social threat vignettes under high working memory load, but not under low working memory load. In Study 2, 53 undergraduates completed a dot probe task with fear-conditioned Japanese characters serving as threat stimuli. Trait anxiety predicted attentional bias to the threat stimuli but, again, this only occurred under high working memory load. Interestingly however, actual eye movements toward the threat stimuli were only associated with state anxiety, and this was not moderated by working memory load, suggesting that executive control regulates biased threat-processing downstream of initial input processes such as orienting. These results suggest that cognitive loads can exacerbate trait anxiety-related cognitive biases, and therefore represent a useful tool for assessing cognitive biases in future research. More importantly, since biased threat-processing has been implicated in the etiology and maintenance of anxiety, poor executive control may be a risk factor for anxiety disorders. |
Kate Burleson-Lesser; Flaviano Morone; Paul DeGuzman; Lucas C. Parra; Hernán A. Makse Collective behaviour in video viewing: A thermodynamic analysis of gaze position Journal Article In: PLoS ONE, vol. 12, no. 1, pp. e0168995, 2017. @article{BurlesonLesser2017, Videos and commercials produced for large audiences can elicit mixed opinions. We wondered whether this diversity is also reflected in the way individuals watch the videos. To answer this question, we presented 65 commercials with high production value to 25 individuals while recording their eye movements, and asked them to provide preference ratings for each video. We find that gaze positions for the most popular videos are highly correlated. To explain the correlations of eye movements, we model them as "interactions" between individuals. A thermodynamic analysis of these interactions shows that they approach a "critical" point such that any stronger interaction would put all viewers into lock-step and any weaker interaction would fully randomise patterns. At this critical point, groups with similar collective behaviour in viewing patterns emerge while maintaining diversity between groups. Our results suggest that popularity of videos is already evident in the way we look at them, and that we maintain diversity in viewing behaviour even as distinct patterns of groups emerge. Our results can be used to predict popularity of videos and commercials at the population level from the collective behaviour of the eye movements of a few viewers. |
David Buttelmann; Andy Schieler; Nicole Wetzel; Andreas Widmann Infants' and adults' looking behavior does not indicate perceptual distraction for constrained modelled actions – An eye-tracking study Journal Article In: Infant Behavior and Development, vol. 47, pp. 103–111, 2017. @article{Buttelmann2017, When observing a novel action, infants pay attention to the model's constraints when deciding whether to imitate this action or not. Gergely et al. (2002) found that more 14-month-olds copied a model's use of her head to operate a lamp when she used her head while her hands were free than when she had to use this means because it was the only means available to her (i.e., her hands were occupied). The perceptional distraction account (Beisert et al., 2012) claims that differences between conditions in terms of the amount of attention infants paid to the modeled action caused the differences in infants' performance between conditions. In order to investigate this assumption we presented 14-month-olds (N = 34) with an eye-tracking paradigm and analyzed their looking behavior when observing the head-touch demonstration in the two original conditions. Subsequently, they had the chance to operate the apparatus themselves, and we measured their imitative responses. In order to explore the perceptional processes taking place in this paradigm in adulthood, we also presented adults (N = 31) with the same task. Apart from the fact that we did not replicate the findings in imitation with our participants, the eye-tracking results do not support the perceptional distraction account: infants did not statistically differ, not even as a trend, in their amount of looking at the modeled action in both conditions. Adults also did not statistically differ in their looking at the relevant action components. However, both groups predominantly observed the relevant head action. Consequently, infants and adults do not seem to attend differently to constrained and unconstrained modelled actions. |
Laura Cacciamani; Erica Wager; Mary A. Peterson; Paige E. Scalf Age-related changes in perirhinal cortex sensitivity to configuration and part familiarity and connectivity to visual cortex Journal Article In: Frontiers in Aging Neuroscience, vol. 9, pp. 291, 2017. @article{Cacciamani2017, The perirhinal cortex (PRC) is a medial temporal lobe (MTL) structure known to be involved in assessing whether an object is familiar (i.e., meaningful) or novel. Recent evidence shows that the PRC is sensitive to the familiarity of both whole object configurations and their parts, and suggests the PRC may modulate part familiarity responses in V2. Here, using functional magnetic resonance imaging (fMRI), we investigated age-related decline in the PRC's sensitivity to part/configuration familiarity and assessed its functional connectivity to visual cortex in young and older adults. Participants categorized peripherally presented silhouettes as familiar ("real-world") or novel. Part/configuration familiarity was manipulated via three silhouette configurations: Familiar (parts/configurations familiar), Control Novel (parts/configurations novel), and Part-Rearranged Novel (parts familiar, configurations novel). "Real-world" judgments were less accurate than "novel" judgments, although accuracy did not differ between age groups. The fMRI data revealed differential neural activity, however: In young adults, a linear pattern of activation was observed in left hemisphere (LH) PRC, with Familiar > Control Novel > Part-Rearranged Novel. Older adults did not show this pattern, indicating age-related decline in the PRC's sensitivity to part/configuration familiarity. A functional connectivity analysis revealed a significant coupling between the PRC and V2 in the LH in young adults only. Older adults showed a linear pattern of activation in the temporopolar cortex (TPC), but no evidence of TPC-V2 connectivity. This is the first study to demonstrate age-related decline in the PRC's representations of part/configuration familiarity and its covariance with visual cortex. |
James E. Cane; Heather J. Ferguson; Ian A. Apperly Using perspective to resolve reference: The impact of cognitive load and motivation Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 4, pp. 591–610, 2017. @article{Cane2017, Research has demonstrated a link between perspective taking and working memory. Here we used eye tracking to examine the time course with which working memory load (WML) influences perspective-taking ability in a referential communication task and how motivation to take another's perspective modulates these effects. In Experiment 1, where there was no reward or time pressure, listeners only showed evidence of incorporating perspective knowledge during integration of the target object but did not anticipate reference to this common ground object during the pretarget-noun period. WML did not affect this perspective use. In Experiment 2, where a reward for speed and accuracy was applied, listeners used perspective cues to disambiguate the target object from the competitor object from the earliest moments of processing (i.e., during the pretarget-noun period), but only under low load. Under high load, responses were comparable with the control condition, where both objects were in common ground. Furthermore, attempts to initiate perspective-relevant responses under high load led to impaired recall on the concurrent WML task, indicating that perspective-relevant responses were drawing on limited cognitive resources. These results show that when there is ambiguity, perspective cues guide rapid referential interpretation when there is sufficient motivation and sufficient cognitive resources. |
Etzel Cardeña; Barbara Nordhjem; David Marcusson-Clavertz; Kenneth Holmqvist The "hypnotic state" and eye movements: Less there than meets the eye? Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0182546, 2017. @article{Cardena2017, Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks had been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis. Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state." Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors. |
Christophe Carlei; David Framorando; Nicolas Burra; Dirk Kerzel Face processing is enhanced in the left and upper visual hemi-fields Journal Article In: Visual Cognition, vol. 25, no. 7-8, pp. 749–761, 2017. @article{Carlei2017, We tested whether two known hemi-field asymmetries would affect visual search with face stimuli. Holistic processing of spatial configurations is better in the left hemi-field, reflecting a right hemisphere specialization, and object recognition is better in the upper visual field, reflecting stronger projections into the ventral stream. Faces tap into holistic processing and object recognition at the same time, which predicts better performance in the left and upper hemi-field, respectively. In the first experiment, participants had to detect a face with a gaze direction different from the remaining faces. Participants were faster to respond when targets were presented in the left and upper hemi-field. The same pattern of results was observed when only the eye region was presented. In the second experiment, we turned the faces upside-down, which eliminated the typical spatial configuration of faces. The left hemi-field advantage disappeared, showing that it is related to holistic processing of faces, whereas the upper hemi-field advantage related to object recognition persisted. Finally, we made the search task easier by asking observers to search for a face with open among closed eyes or vice versa. The easy search task eliminated the need for complex object recognition and accordingly, the advantage of the upper visual field disappeared. Similarly, the left hemi-field advantage was attenuated. In sum, our findings show that both horizontal and vertical asymmetries affect search for faces and can be selectively suppressed by changing characteristics of the stimuli. |
Nathan Caruana; Peter de Lissa; Genevieve McArthur Beliefs about human agency influence the neural processing of gaze during joint attention Journal Article In: Social Neuroscience, vol. 12, no. 2, pp. 194–206, 2017. @article{Caruana2017b, The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition), while others believed it was controlled by a computer program. |
Nathan Caruana; Genevieve McArthur; Alexandra Woolgar; Jon Brock Detecting communicative intent in a computerised test of joint attention Journal Article In: PeerJ, vol. 5, pp. 1–16, 2017. @article{Caruana2017, The successful navigation of social interactions depends on a range of cognitive faculties—including the ability to achieve joint attention with others to share information and experiences. We investigated the influence that intention monitoring processes have on gaze-following response times during joint attention. We employed a virtual reality task in which 16 healthy adults engaged in a collaborative game with a virtual partner to locate a target in a visual array. In the Search task, the virtual partner was programmed to engage in non-communicative gaze shifts in search of the target, establish eye contact, and then display a communicative gaze shift to guide the participant to the target. In the NoSearch task, the virtual partner simply established eye contact and then made a single communicative gaze shift towards the target (i.e., there were no non-communicative gaze shifts in search of the target). Thus, only the Search task required participants to monitor their partner's communicative intent before responding to joint attention bids. We found that gaze following was significantly slower in the Search task than the NoSearch task. However, the same effect on response times was not observed when participants completed non-social control versions of the Search and NoSearch tasks, in which the avatar's gaze was replaced by arrow cues. These data demonstrate that the intention monitoring processes involved in differentiating communicative and non-communicative gaze shifts during the Search task had a measurable influence on subsequent joint attention behaviour. The empirical and methodological implications of these findings for the fields of autism and social neuroscience will be discussed. |
Nathan Caruana; Dean Spirou; Jon Brock Human agency beliefs influence behaviour during virtual social interactions Journal Article In: PeerJ, vol. 5, pp. 1–18, 2017. @article{Caruana2017a, In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals. |
Filip Děchtěrenko; Jiří Lukavský; Kenneth Holmqvist Flipping the stimulus: Effects on scanpath coherence? Journal Article In: Behavior Research Methods, vol. 49, no. 1, pp. 382–393, 2017. @article{Dechterenko2017, In experiments investigating dynamic tasks, it is often useful to examine eye movement scan patterns. We can present trials repeatedly and compute within-subjects/conditions similarity in order to distinguish between signal and noise in gaze data. To avoid obvious repetitions of trials, filler trials must be added to the experimental protocol, resulting in long experiments. Alternatively, trials can be modified to reduce the chances that the participant will notice the repetition, while avoiding significant changes in the scan patterns. In tasks in which the stimuli can be geometrically transformed without any loss of meaning, flipping the stimuli around either of the axes represents a candidate modification. In this study, we examined whether flipping of stimulus object trajectories around the x- and y-axes resulted in comparable scan patterns in a multiple object tracking task. We developed two new strategies for the statistical comparison of similarity between two groups of scan patterns, and then tested those strategies on artificial data. Our results suggest that although the scan patterns in flipped trials differ significantly from those in the original trials, this difference is small (as little as a 13% increase of overall distance). Therefore, researchers could use geometric transformations to test more complex hypotheses regarding scan pattern coherence while retaining the same duration for experiments. |
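The Děchtěrenko et al. entry above rests on two operations that can be stated concretely: mapping gaze data from a mirrored trial back into the original stimulus frame, and scoring point-by-point similarity between two scan patterns. The authors' two statistical comparison strategies are not described in the abstract, so the following is only a generic, hypothetical sketch; the function names, screen width, and coordinates are invented for illustration:

```python
import math

def flip_horizontal(scanpath, screen_width):
    """Mirror a sequence of (x, y) gaze samples around the vertical
    midline of the display, matching a left-right flipped stimulus."""
    return [(screen_width - x, y) for (x, y) in scanpath]

def mean_pointwise_distance(path_a, path_b):
    """Mean Euclidean distance between two scan patterns sampled at the
    same time points; smaller values indicate more similar patterns."""
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(path_a, path_b)) / len(path_a)

# Gaze samples from an original trial, and from a repeat shown mirrored;
# the repeat is flipped back into the original frame before comparison.
original = [(100.0, 200.0), (300.0, 250.0), (512.0, 300.0)]
repeat_on_flipped = [(914.0, 205.0), (714.0, 240.0), (524.0, 310.0)]
score = mean_pointwise_distance(original,
                                flip_horizontal(repeat_on_flipped, 1024))
```

A small `score` relative to distances between unrelated trials would indicate that the mirrored repetition preserved the scan pattern, which is the comparison the study's "overall distance" result refers to.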
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco In: Journal of Neurophysiology, vol. 118, no. 3, pp. 1809–1823, 2017. @article{DelleMonache2017, The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ, hMT/V5+, or IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ stimulation affected all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses while being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. |
H. Devillez; Anne Guérin-Dugué; N. Guyader How a distractor influences fixations during the exploration of natural scenes Journal Article In: Journal of Eye Movement Research, vol. 10, no. 2, pp. 1–13, 2017. @article{Devillez2017, The distractor effect is a well-established means of studying different aspects of fixation programming during the exploration of visual scenes. In this study, we present a task-irrelevant distractor to participants during the free exploration of natural scenes. We investigate the control and programming of fixations by analyzing fixation durations and locations, and the link between the two. We also propose a simple mixture model evaluated using the Expectation-Maximization algorithm to test the distractor effect on fixation locations, including fixations which did not land on the distractor. The model allows us to quantify the influence of a visual distractor on fixation location relative to scene saliency for all fixations, at distractor onset and during all subsequent exploration. The distractor effect is not just limited to the current fixation; it continues to influence fixations during subsequent exploration. An abrupt change in the stimulus not only increases the duration of the current fixation, it also influences the location of the fixation which occurs immediately afterwards and, to some extent as a function of the length of the change, the duration and location of any subsequent fixations. Overall, results from the eye movement analysis and the statistical model suggest that fixation durations and locations are both controlled by direct and indirect mechanisms. |
Christel Devue; Gina M. Grimshaw Faces are special, but facial expressions aren't: Insights from an oculomotor capture paradigm Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 5, pp. 1438–1452, 2017. @article{Devue2017, We compared the ability of angry and neutral faces to drive oculomotor behaviour as a test of the widespread claim that emotional information is automatically prioritized when competing for attention. Participants were required to make a saccade to a colour singleton; photos of angry or neutral faces appeared amongst other objects within the array, and were completely irrelevant for the task. Eye-tracking measures indicate that faces drive oculomotor behaviour in a bottom-up fashion; however, angry faces are no more likely to capture the eyes than neutral faces are. Saccade latencies suggest that capture occurs via reflexive saccades and that the outcome of competition between salient items (colour singletons and faces) may be subject to fluctuations in attentional control. Indeed, although angry and neutral faces captured the eyes reflexively on a portion of trials, participants successfully maintained goal-relevant oculomotor behaviour on a majority of trials. We outline potential cognitive and brain mechanisms underlying oculomotor capture by faces. |
Nicholas K. DeWind; Jiyun Peng; Andrew Luo; Elizabeth M. Brannon; Michael L. Platt Pharmacological inactivation does not support a unique causal role for intraparietal sulcus in the discrimination of visual number Journal Article In: PLoS ONE, vol. 12, no. 12, pp. e0188820, 2017. @article{DeWind2017, The "number sense" describes the intuitive ability to quantify without counting. Single neuron recordings in non-human primates and functional imaging in humans suggest the intraparietal sulcus is an important neuroanatomical locus of numerical estimation. Other lines of inquiry implicate the IPS in numerous other functions, including attention and decision making. Here we provide a direct test of whether IPS has functional specificity for numerosity judgments. We used muscimol to reversibly and independently inactivate the ventral and lateral intraparietal areas in two monkeys performing a numerical discrimination task and a color discrimination task, roughly equilibrated for difficulty. Inactivation of either area caused parallel impairments in both tasks and no evidence of a selective deficit in numerical processing. These findings do not support a causal role for the IPS in numerical discrimination, except insofar as it also has a role in the discrimination of color. We discuss our findings in light of several alternative hypotheses of IPS function, including a role in orienting responses, a general cognitive role in attention and decision making processes, and a more specific role in ordinal comparison that encompasses both number and color judgments. |
Nathaniel T. Diede; Julie M. Bugg Cognitive effort is modulated outside of the explicit awareness of conflict frequency: Evidence from pupillometry Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 43, no. 5, pp. 824–835, 2017. @article{Diede2017, Classic theories of cognitive control conceptualized controlled processes as slow, strategic, and willful, with automatic processes being fast and effortless. The context-specific proportion compatibility (CSPC) effect, the reduction in the compatibility effect in a context (e.g., location) associated with a high relative to low likelihood of conflict, challenged classic theories by demonstrating fast and flexible control that appears to operate outside of conscious awareness. Two theoretical questions yet to be addressed are whether the CSPC effect is accompanied by context-dependent variation in effort, and whether the exertion of effort depends on explicit awareness of context-specific task demands. To address these questions, pupil diameter was measured during a CSPC paradigm. Stimuli were randomly presented in either a mostly compatible location or a mostly incompatible location. Replicating prior research, the CSPC effect was found. The novel finding was that pupil diameter was greater in the mostly incompatible location compared to the mostly compatible location, despite participants' lack of awareness of context-specific task demands. Additionally, this difference occurred regardless of trial type or a preceding switch in location. These patterns support the view that context (location) dictates selection of optimal attentional settings in the CSPC paradigm, and varying levels of effort and performance accompany these settings. Theoretically, these patterns imply that cognitive control may operate fast, flexibly, and outside of awareness, but not effortlessly. |
Kevin C. Dieter; Jocelyn L. Sy; Randolph Blake Individual differences in sensory eye dominance reflected in the dynamics of binocular rivalry Journal Article In: Vision Research, vol. 141, pp. 40–50, 2017. @article{Dieter2017, Normal binocular vision emerges from the combination of neural signals arising within separate monocular pathways. It is natural to wonder whether both eyes contribute equally to the unified cyclopean impression we ordinarily experience. Binocular rivalry, which occurs when the inputs to the two eyes are markedly different, affords a useful means for quantifying the balance of influence exerted by the eyes (called sensory eye dominance, SED) and for relating that degree of balance to other aspects of binocular visual function. However, the precise ways in which binocular rivalry dynamics change when the eyes are unbalanced remain uncharted. Relying on widespread individual variability in the relative predominance of the two eyes as demonstrated in previous studies, we found that an observer's overall tendency to see one eye more than the other was driven both by differences in the relative duration and frequency of instances of that eye's perceptual dominance. Specifically, larger imbalances between the eyes were associated with longer and more frequent periods of exclusive dominance for the stronger eye. Increases in occurrences of dominant eye percepts were mediated in part by a tendency to experience “return transitions” to the predominant eye – that is, observers often experienced sequential exclusive percepts of the dominant eye's image with an intervening mixed percept. Together, these results indicate that the often-observed imbalances between the eyes during binocular rivalry reflect true differences in sensory processing, a finding that has implications for our understanding of the mechanisms underlying binocular vision in general. |
Barbara Dillenburger; Michael Morgan Saccades to explicit and virtual features in the Poggendorff figure show perceptual biases Journal Article In: i-Perception, vol. 8, no. 2, pp. 1–21, 2017. @article{Dillenburger2017, Human participants made saccadic eye movements to various features in a modified vertical Poggendorff figure, to measure errors in the location of key geometrical features. In one task, subjects (n = 8) made saccades to the vertex of the oblique T-intersection between a diagonal pointer and a vertical line. Results showed both a small tendency to shift the saccade toward the interior of the angle, and a larger bias in the direction of a shorter saccade path to the landing line. In a different kind of task (visual extrapolation), the same subjects fixated the tip of a 45° pointer and made a saccade to the implicit point of intersection between the pointer and a distant vertical line. Results showed large errors in the saccade landing positions and the saccade polar angle, in the direction predicted from the perceptual Poggendorff bias. Further experiments manipulated the position of the fixation point relative to the implicit target, such that the Poggendorff bias would be in the opposite direction from a bias toward taking the shortest path to the landing line. The bias was still significant. We conclude that the Poggendorff bias in eye movements is in part due to the mislocation of visible target features but also to biases in planning a saccade to a virtual target across a gap. The latter kind of error comprises both a tendency to take the shortest path to the landing line, and a perceptual error that overestimates the vector component orthogonal to the gap. |
Nicholas E. DiQuattro; Joy J. Geng Presaccadic target competition attenuates distraction Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 4, pp. 1087–1096, 2017. @article{DiQuattro2017, Although it is well known that salient nontargets can capture attention despite being task irrelevant, several studies have reported short fixation dwell times, suggesting the presence of an attentional mechanism to "rapidly reject" dissimilar distractors. Rapid rejection has been hypothesized to depend on the strong mismatch between distractor features and the target template, but it is unknown whether the presence of strong feature mismatch is sufficient, or if the presence of a target at a competing location is also necessary. Here, we investigated this question by first replicating the finding of rapid rejection for dissimilar distractors in the presence of a concurrent target (Experiment 1); manipulating the onset of the target stimulus relative to the distractor (Experiment 2); and using a saccade-contingent display to delay the target onset until after the first saccade was initiated (Experiment 3). The results demonstrate that the speed of distractor rejection depends on the presence of target competition prior to the initiation of the first saccade, and not after the saccade. This suggests that stimulus competition for covert attention sets a "saccade priority map" that unfolds over time, resulting in faster corrective saccades to an anticipated object with higher top-down attentional priority. |
Marjorie Dole; David Méary; Olivier Pascalis Modifications of visual field asymmetries for face categorization in early deaf adults: A study with chimeric faces Journal Article In: Frontiers in Psychology, vol. 8, pp. 30, 2017. @article{Dole2017, Right hemisphere lateralization for face processing is well documented in typical populations. At the behavioral level, this right hemisphere bias is often related to a left visual field (LVF) bias. A conventional means to study this phenomenon consists of using chimeric faces that are composed of the left and right parts of two faces. In this paradigm, participants generally use the left part of the chimeric face, mostly processed through the right optic tract, to determine its identity, gender or age. To assess the impact of early auditory deprivation on face processing abilities, we tested the LVF bias in a group of early deaf participants and hearing controls. In two experiments, deaf and hearing participants performed a gender categorization task with chimeric and normal average faces. Over the two experiments the results confirmed the presence of a LVF bias in participants, which was less frequent in deaf participants. This result suggested modifications of hemispheric lateralization for face processing in deaf participants. In Experiment 2 we also recorded eye movements to examine whether the LVF bias could be related to face scanning behavior. In this second study, participants performed a similar task while we recorded eye movements using an eye tracking system. Using areas of interest analysis we observed that the proportion of fixations on the mouth relative to the other areas was increased in deaf participants in comparison with the hearing group. This was associated with a decrease in the proportion of fixations on the eyes. In addition, these measures were correlated with the LVF bias, suggesting a relationship between the LVF bias and the patterns of facial exploration.
Taken together, these results suggest that early auditory deprivation results in plasticity phenomena affecting the perception of static faces through modifications of hemispheric lateralization and of gaze behavior. |
Ewa Domaradzka; Maksymilian Bielecki Deadly attraction - attentional bias toward preferred cigarette brand in smokers Journal Article In: Frontiers in Psychology, vol. 8, pp. 1365, 2017. @article{Domaradzka2017, Numerous studies have shown that biases in visual attention might be evoked by affective and personally relevant stimuli, for example addiction-related objects. Despite the fact that addiction is often linked to specific products and systematic purchase behaviors, no studies focused directly on the existence of bias evoked by brands. Smokers are characterized by high levels of brand loyalty and everyday contact with cigarette packaging. Using the incentive-salience mechanism as a theoretical framework, we hypothesized that this group might exhibit a bias toward the preferred cigarette brand. In our study, a group of smokers (N = 40) performed a dot probe task while their eye movements were recorded. In every trial a pair of pictures was presented – each of them showed a single cigarette pack. The visual properties of stimuli were carefully controlled, so branding information was the key factor affecting subjects' reactions. For each participant, we compared gaze behavior related to the preferred vs. other brands. The analyses revealed no attentional bias in the early, orienting phase of the stimulus processing and strong differences in maintenance and disengagement. Participants spent more time looking at the preferred cigarettes and saccades starting at the preferred brand location had longer latencies. In sum, our data shows that attentional bias toward brands might be found in situations not involving choice or decision making. These results provide important insights into the mechanisms of formation and maintenance of attentional biases to stimuli of personal relevance and might serve as a first step toward developing new attitude measurement techniques. |
Albert End; Matthias Gamer Preferential processing of social features and their interplay with physical saliency in complex naturalistic scenes Journal Article In: Frontiers in Psychology, vol. 8, pp. 418, 2017. @article{End2017, According to so-called saliency-based attention models, attention during free viewing of visual scenes is particularly allocated to physically salient image regions. In the present study, we assumed that social features in complex naturalistic scenes would be processed preferentially irrespective of their physical saliency. Therefore, we expected worse prediction of gazing behavior by saliency-based attention models when social information is present in the visual field. To test this hypothesis, participants freely viewed color photographs of complex naturalistic social (e.g., including heads, bodies) and non-social (e.g., including landscapes, objects) scenes while their eye movements were recorded. In agreement with our hypothesis, we found that social features (especially heads) were heavily prioritized during visual exploration. Correspondingly, the presence of social information weakened the influence of low-level saliency on gazing behavior. Importantly, this pattern was most pronounced for the earliest fixations indicating automatic attentional processes. These findings were further corroborated by a linear mixed model approach showing that social features (especially heads) add substantially to the prediction of fixations beyond physical saliency. Taken together, the current study indicates gazing behavior for naturalistic scenes to be better predicted by the interplay of social and physically salient features than by low-level saliency alone. These findings strongly challenge the generalizability of saliency-based attention models and demonstrate the importance of considering social influences when investigating the driving factors of human visual attention. |
Vivian Eng; Alfred Lim; Simon Kwon; Su Ren Gan; S. Azrin Jamaluddin; Steve M. J. Janssen; Jason Satel Stimulus-response incompatibility eliminates inhibitory cueing effects with saccadic but not manual responses Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 4, pp. 1097–1106, 2017. @article{Eng2017, There are thought to be two forms of inhibition of return (IOR) depending on whether the oculomotor system is activated or suppressed. When saccades are allowed, output-based IOR is generated, whereas input-based IOR arises when saccades are prohibited. In a series of 4 experiments, we mixed or blocked compatible and incompatible trials with saccadic or manual responses to investigate whether cueing effects would follow the same pattern as those observed with more traditional peripheral onsets and central arrows. In all experiments, an uninformative cue was displayed, followed by a cue-back stimulus that was either red or green, indicating whether a compatible or incompatible response was required. The results showed that IOR was indeed observed for compatible responses in all tasks, whereas IOR was eliminated for incompatible trials, but only with saccadic responses. These findings indicate that the dissociation between input- and output-based forms of IOR depends on more than just oculomotor activation, providing further support for the existence of an inhibitory cueing effect that is distinct to the manual response modality. |
Michael A. Eskenazi; Jocelyn R. Folk Regressions during reading: The cost depends on the cause Journal Article In: Psychonomic Bulletin & Review, vol. 24, no. 4, pp. 1211–1216, 2017. @article{Eskenazi2017, The direction and duration of eye movements during reading is predominantly determined by cognitive and linguistic processing, but some low-level oculomotor effects also influence the duration and direction of eye movements. One such effect is inhibition of return (IOR), which results in an increased latency to return attention to a target that has been previously attended (Posner & Cohen, Attention and Performance X: Control of Language Processes, 32, 531–556, 1984). Although this is a low level effect, it has also been found in the complex task of reading (Henderson & Luke, Psychonomic Bulletin & Review, 19(6), 1101–1107, 2012; Rayner, Juhasz, Ashby, & Clifton, Vision Research, 43(9), 1027–1034, 2003). The purpose of the current study was to isolate the potentially different causes of regressive eye movements: to adjust for oculomotor error and to assist with comprehension difficulties. We found that readers demonstrated an IOR effect when regressions were caused by oculomotor error, but not when regressions were caused by comprehension difficulties. The results suggest that IOR is primarily associated with low-level oculomotor control of eye movements, and that regressive eye movements that are controlled by comprehension processes are not subject to IOR effects. The results have implications for understanding the relationship between oculomotor and cognitive control of eye movements and for models of eye movement control. |
Alejandro J. Estudillo; Markus Bindemann Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 5, pp. 944–958, 2017. @article{Estudillo2017, This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face either mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed. |
Ulrich Ettinger; Eliana Faiola; Anna-Maria Kasparbauer; Nadine Petrovsky; Raymond C. K. Chan; Roman Liepelt; Veena Kumari Effects of nicotine on response inhibition and interference control Journal Article In: Psychopharmacology, vol. 234, no. 7, pp. 1093–1111, 2017. @article{Ettinger2017, Nicotine is a cholinergic agonist with known pro-cognitive effects in the domains of alerting and orienting attention. However, its effects on attentional top-down functions such as response inhibition and interference control are less well characterised. Here, we investigated the effects of 7 mg transdermal nicotine on performance on a battery of response inhibition and interference control tasks. A sample of N = 44 healthy adult non-smokers performed antisaccade, stop signal, Stroop, go/no-go, flanker, shape matching and Simon tasks, as well as the attentional network test (ANT) and a continuous performance task (CPT). Nicotine was administered in a within-subjects, double-blind, placebo-controlled design, with order of drug administration counterbalanced. Relative to placebo, nicotine led to significantly shorter reaction times on a prosaccade task and on CPT hits but did not significantly improve inhibitory or interference control performance on any task. Instead, nicotine had a negative influence in increasing the interference effect on the Simon task. Nicotine did not alter inter-individual associations between reaction times on congruent trials and error rates on incongruent trials on any task. Finally, there were effects involving order of drug administration, suggesting practice effects but also beneficial nicotine effects when the compound was administered first. Overall, our findings support previous studies showing positive effects of nicotine on basic attentional functions but do not provide direct evidence for an improvement of top-down cognitive control through acute administration of nicotine at this dose in healthy non-smokers. |
Jonas Everaert; Ivan Grahek; Wouter Duyck; Jana Buelens; Nathan Van den Bergh; Ernst H. W. Koster Mapping the interplay among cognitive biases, emotion regulation, and depressive symptoms Journal Article In: Cognition and Emotion, vol. 31, no. 4, pp. 726–735, 2017. @article{Everaert2017a, Cognitive biases and emotion regulation (ER) difficulties have been instrumental in understanding hallmark features of depression. However, little is known about the interplay among these important risk factors to depression. This cross-sectional study investigated how multiple cognitive biases modulate the habitual use of ER processes and how ER habits subsequently regulate depressive symptoms. All participants first executed a computerised version of the scrambled sentences test (interpretation bias measure) while their eye movements were registered (attention bias measure) and then completed questionnaires assessing positive reappraisal, brooding, and depressive symptoms. Path and bootstrapping analyses supported both direct effects of cognitive biases on depressive symptoms and indirect effects via the use of brooding and via the use of reappraisal that was in turn related to the use of brooding. These findings help to formulate a better understanding of how cognitive biases and ER habits interact to maintain depressive symptoms. |
Jonas Everaert; Ivan Grahek; Ernst H. W. Koster Individual differences in cognitive control over emotional material modulate cognitive biases linked to depressive symptoms Journal Article In: Cognition and Emotion, vol. 31, no. 4, pp. 736–746, 2017. @article{Everaert2017, Deficient cognitive control over emotional material and cognitive biases are important mechanisms underlying depression, but the interplay between these emotionally distorted cognitive processes in relation to depressive symptoms is not well understood. This study investigated the relations among deficient cognitive control of emotional information (i.e. inhibition, shifting, and updating difficulties), cognitive biases (i.e. negative attention and interpretation biases), and depressive symptoms. Theory-driven indirect effect models were constructed, hypothesising that deficient cognitive control over emotional material predicts depressive symptoms through negative attention and interpretation biases. Bootstrapping analyses demonstrated that deficient inhibitory control over negative material was related to negative attention bias which in turn predicted a congruent bias in interpretation and subsequently depressive symptoms. Both shifting and updating impairments in response to negative material had an indirect effect on depression severity through negative interpretation bias. No evidence was found for direct effects of deficient cognitive control over emotional material on depressive symptoms. These findings may help to formulate an integrated understanding of the cognitive foundations of depressive symptoms. |
Laura Fademrecht; Isabelle Bülthoff; Stephan de la Rosa Action recognition is viewpoint-dependent in the visual periphery Journal Article In: Vision Research, vol. 135, pp. 10–15, 2017. @article{Fademrecht2017a, Recognizing actions of others across the whole visual field is required for social interactions. In a previous study, we have shown that recognition is very good even when life-size avatars who were facing the observer carried out actions (e.g. waving) and were presented very far away from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). We explored whether this remarkable performance was due to life-size avatars facing the observer, which – according to some social cognitive theories (e.g. Schilbach et al., 2013) – could potentially activate different social perceptual processes than profile-facing avatars. Participants therefore viewed a life-size stick figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching and kicking) in frontal and profile view. Participants' task was to identify the actions as ‘greeting' or as ‘attack' or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were significantly faster in general for profile views (i.e. the moving avatar was seen profile on) than for frontal views (i.e. the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not due to a more socially engaging front-facing view. Although action recognition seems to depend on viewpoint, action recognition in general remains remarkably accurate even far into the visual periphery. |
Laura Fademrecht; Judith Nieuwenhuis; Isabelle Bülthoff; Nick Barraclough; Stephan de la Rosa Action recognition in a crowded environment Journal Article In: i-Perception, pp. 1–19, 2017. @article{Fademrecht2017, So far, action recognition has been mainly examined with small point-light human stimuli presented alone within a narrow central area of the observer's visual field. Yet, we need to recognize the actions of life-size humans viewed alone or surrounded by bystanders, whether they are seen in central or peripheral vision. Here, we examined the mechanisms in central vision and far periphery (40° eccentricity) involved in the recognition of the actions of a life-size actor (target) and their sensitivity to the presence of a crowd surrounding the target. In Experiment 1, we used an action adaptation paradigm to probe whether static or idly moving crowds might interfere with the recognition of a target's action (hug or clap). We found that this type of crowds whose movements were dissimilar to the target action hardly affected action recognition in central and peripheral vision. In Experiment 2, we examined whether crowd actions that were more similar to the target actions affected action recognition. Indeed, the presence of that crowd diminished adaptation aftereffects in central vision as well as in the periphery. We replicated Experiment 2 using a recognition task instead of an adaptation paradigm. With this task, we found evidence of decreased action recognition accuracy, but this was significant in peripheral vision only. Our results suggest that the presence of a crowd carrying out actions similar to that of the target affects its recognition. We outline how these results can be understood in terms of high-level crowding effects that operate on action-sensitive perceptual channels. |
Sali M. K. Farhan; Robert Bartha; Sandra E. Black; Dale Corbett; Elizabeth Finger; Morris Freedman; Barry Greenberg; David A. Grimes; Robert A. Hegele; Chris Hudson; Peter W. Kleinstiver; Anthony E. Lang; Mario Masellis; William E. McIlroy; Paula M. McLaughlin; Manuel Montero-Odasso; David G. Munoz; Douglas P. Munoz; Stephen Strother; Richard H. Swartz; Sean Symons; Maria Carmela Tartaglia; Lorne Zinman; Michael J. Strong The Ontario Neurodegenerative Disease Research Initiative (ONDRI) Journal Article In: Canadian Journal of Neurological Sciences, vol. 44, no. 2, pp. 196–202, 2017. @article{Farhan2017, Because individuals develop dementia as a manifestation of neurodegenerative or neurovascular disorder, there is a need to develop reliable approaches to their identification. We are undertaking an observational study (Ontario Neurodegenerative Disease Research Initiative [ONDRI]) that includes genomics, neuroimaging, and assessments of cognition as well as language, speech, gait, retinal imaging, and eye tracking. Disorders studied include Alzheimer's disease, amyotrophic lateral sclerosis, frontotemporal dementia, Parkinson's disease, and vascular cognitive impairment. Data from ONDRI will be collected into the Brain-CODE database to facilitate correlative analysis. ONDRI will provide a repertoire of endophenotyped individuals that will be a unique, publicly available resource. |
Heather J. Ferguson; Ian Apperly; James E. Cane Eye tracking reveals the cost of switching between self and other perspectives in a visual perspective-taking task Journal Article In: Quarterly Journal of Experimental Psychology, vol. 70, no. 8, pp. 1646–1660, 2017. @article{Ferguson2017, Previous studies have shown that while people can rapidly and accurately compute their own and other people's visual perspectives, they experience difficulty ignoring the irrelevant perspective when the two perspectives differ. We used the ‘avatar' perspective-taking task to examine the mechanisms that underlie these egocentric (i.e. interference from their own perspective) and altercentric (i.e. interference from the other person's perspective) tendencies. Participants were eye-tracked as they verified the number of discs in a visual scene according to either their own or an on-screen avatar's perspective. Crucially, in some trials the two perspectives were inconsistent (i.e. each saw a different number of discs), while in others they were consistent. To examine the effect of perspective switching, performance was compared for trials that were preceded with the same versus a different perspective cue. We found that altercentric interference can be reduced or eliminated when participants stick with their own perspective across consecutive trials. Our eye-tracking analyses revealed distinct fixation patterns for self and other perspective-taking, suggesting that consistency effects in this paradigm are driven by implicit mentalising of what others can see, and not automatic directional cues from the avatar. |
Evelyn C. Ferstl; Laura Israel; Lisa Putzar Humor facilitates text comprehension: Evidence from eye movements Journal Article In: Discourse Processes, vol. 54, no. 4, pp. 259–284, 2017. @article{Ferstl2017, One crucial property of verbal jokes is that the punchline usually contains an incongruency that has to be resolved by updating the situation model representation. In the standard pragmatic model, these processes are considered to require cognitive effort. However, only a few studies have compared jokes to texts requiring a situation model revision without being funny. In the present study, participants' eye movements were recorded while they read short texts falling into four categories: jokes, texts that made a revision of the situation model necessary without being funny (revision texts), and two types of control texts. Jokes were read faster and elicited fewer regressive eye movements than the other text categories. Women were more sensitive to the revision and inference demands of nonhumorous texts than men, and this was particularly the case when the instructions required a meta-linguistic evaluation. In contrast to the predictions of the two-stage model of pragmatics, humor appreciation facilitated text comprehension, and this effect was more pronounced for men than for women. |
Matthew Heath; Erin M. Shellington; Sam Titheridge; Dawn P. Gill; Robert J. Petrella In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017. @article{Heath2017, Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk of, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with an SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., an eye movement mirror-symmetrical to a target). Antisaccades are an ideal tool for the study of individuals with subtle executive deficits because of the task's hands- and language-free nature and because its neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention, and the magnitude of the decrease was consistent across groups. Thus, multiple-modality exercise training improved executive performance in persons with an SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with an SCC. |
Jens R. Helmert; Claudia Symmank; Sebastian Pannasch; Harald Rohm Have an eye on the buckled cucumber: An eye tracking study on visually suboptimal foods Journal Article In: Food Quality and Preference, vol. 60, pp. 40–47, 2017. @article{Helmert2017, Waste is an ever-growing problem in the food supply chain, from production through to consumers' households. A precondition for a consumer to purchase a product is to recognize it as an option in the first place. Therefore, in the present study, we investigated eye movement behavior on impeccable and visually suboptimal food items in a purchase or discard decision task. Additionally, in some trials the price badges of the suboptimal food items were specifically designed to attract attention. Design changes included messages regarding price and taste, respectively, presented in either red or green. The results show that the design changes indeed attracted attention towards suboptimal food items in terms of time to first fixation, and also prolonged total fixation duration. However, only color yielded differences between the design variations, with red resulting in longer total fixation durations. Additionally, we inspected choice behavior towards visually suboptimal food items. As expected, purchase decisions declined for the suboptimal as compared to the impeccable items. However, when presented with differently designed price badges, a positive trend to purchase the suboptimal items was obtained. Our results show that price badge designs impact attention, cognitive processing, and ultimately purchase decisions. Therefore, supplying visually suboptimal food in stores should be embedded in efforts to attract attention towards these products, as selling visually suboptimal food might positively impact the waste balance in the food domain. |
Andrea Helo; Sandrien van Ommen; Sebastian Pannasch; Lucile Danteny-Dordoigne; Pia Rämä Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers Journal Article In: Infant Behavior and Development, vol. 49, pp. 248–266, 2017. @article{Helo2017, Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes which contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. 
Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. |
John M. Henderson; Taylor R. Hayes Meaning-based guidance of attention in scenes as revealed by meaning maps Journal Article In: Nature Human Behaviour, vol. 1, no. 10, pp. 743–747, 2017. @article{Henderson2017, Real-world scenes comprise a blooming, buzzing confusion of information. To manage this complexity, visual attention is guided to important scene regions in real time [1–7]. What factors guide attention within scenes? A leading theoretical position suggests that visual salience based on semantically uninterpreted image features plays the critical causal role in attentional guidance, with knowledge and meaning playing a secondary or modulatory role [8–11]. Here we propose instead that meaning plays the dominant role in guiding human attention through scenes. To test this proposal, we developed ‘meaning maps' that represent the semantic richness of scene regions in a format that can be directly compared to image salience. We then contrasted the degree to which the spatial distributions of meaning and salience predict viewers' overt attention within scenes. The results showed that both meaning and salience predicted the distribution of attention, but that when the relationship between meaning and salience was controlled, only meaning accounted for unique variance in attention. This pattern of results was apparent from the very earliest time-point in scene viewing. We conclude that meaning is the driving force guiding attention through real-world scenes. |
Piril Hepsomali; Julie A. Hadwin; Simon P. Liversedge; Matthew Garner Pupillometric and saccadic measures of affective and executive processing in anxiety Journal Article In: Biological Psychology, vol. 127, pp. 173–179, 2017. @article{Hepsomali2017, Anxious individuals report hyper-arousal and sensitivity to environmental stimuli, difficulties concentrating, performing tasks efficiently, and inhibiting unwanted thoughts and distraction. We used pupillometry and eye-movement measures to compare high vs. low anxious individuals' hyper-reactivity to emotional stimuli (facial expressions) and subsequent attentional biases in a memory-guided pro- and antisaccade task during conditions of low and high cognitive load (short vs. long delay). High anxious individuals produced larger and slower pupillary responses to face stimuli, and more erroneous eye movements, particularly following the long delay. Low anxious individuals' pupillary responses were sensitive to task demand (reduced during the short delay), whereas high anxious individuals' responses were not. These findings provide evidence in anxiety of enhanced, sustained, and inflexible patterns of pupil responding during affective stimulus processing and cognitive load that precede deficits in task performance. |
James P. Herman; Richard J. Krauzlis Color-change detection activity in the primate superior colliculus Journal Article In: eNeuro, vol. 4, no. 2, pp. 1–16, 2017. @article{Herman2017, The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements. |
Erno J. Hermans; Jonathan W. Kanen; Arielle Tambini; Guillén Fernández; Lila Davachi; Elizabeth A. Phelps In: Cerebral Cortex, vol. 27, no. 5, pp. 3028–3041, 2017. @article{Hermans2017, After encoding, memories undergo a process of consolidation that determines long-term retention. For conditioned fear, animal models postulate that consolidation involves reactivations of neuronal assemblies supporting fear learning during postlearning "offline" periods. However, no human studies to date have investigated such processes, particularly in relation to long-term expression of fear. We tested 24 participants using functional MRI on 2 consecutive days in a fear conditioning paradigm involving 1 habituation block, 2 acquisition blocks, and 2 extinction blocks on day 1, and 2 re-extinction blocks on day 2. Conditioning blocks were preceded and followed by 4.5-min rest blocks. Strength of spontaneous recovery of fear on day 2 served as a measure of long-term expression of fear. Amygdala connectivity, primarily with hippocampus, increased progressively during postacquisition and postextinction rest on day 1. Intraregional multi-voxel correlation structures within amygdala and hippocampus sampled during a block of differential fear conditioning furthermore persisted after fear learning. Critically, both these main findings were stronger in participants who exhibited spontaneous recovery 24 h later. Our findings indicate that neural circuits activated during fear conditioning exhibit persistent postlearning activity that may be functionally relevant in promoting consolidation of the fear memory. |
Frouke Hermens The effects of social and symbolic cues on visual search: Cue shape trumps biological relevance Journal Article In: Psihologija, vol. 50, no. 2, pp. 117–140, 2017. @article{Hermens2017a, Arrow signs are often used in crowded environments such as airports to direct observers' attention to objects and areas of interest. Research with social and symbolic cues presented in isolation at fixation has suggested that social cues (such as eye gaze and pointing hands) are more effective in directing observers' attention than symbolic cues. The present work examines whether in visual search, social cues would therefore be more effective than arrows, by asking participants to locate target objects in crowded displays that were cued by eye-gaze, pointing hands or arrow cues. Results show an advantage for arrow cues, but only for arrow cues that stand out from the surroundings. The results confirm earlier suggestions that in extrafoveal vision cue shape trumps biological relevance. Eye movements suggest that these cueing effects rely predominantly on extrafoveal perception of the cues. |
Frouke Hermens The influence of social stigmas on observers' eye movements Journal Article In: Journal of Articles in Support of the Null Hypothesis, vol. 14, no. 1, pp. 1–18, 2017. @article{Hermens2017, Some social stigmas are associated with clear visual cues (facial scars, tattoos). Eye tracking has shown that such social stigmas influence the eye movements of other people. Other social stigmas often go without clearly visible cues (e.g., a mental illness or a criminal record). The present study investigates whether providing information about such stigmas affects eye movements of observers. Participants were presented with video clips and advance information about one of the actors that was either stigmatizing (related to mental health or a criminal past) or non-stigmatizing. The results show that eye movements towards the target actor were not systematically affected by stigmatizing advance information and were not associated with explicit attitudes from questionnaires. Results therefore suggest that stigmas without clear visual cues do not draw attention to or away from the person involved. |
Frouke Hermens; Markus Bindemann; A. Mike Burton Responding to social and symbolic extrafoveal cues: Cue shape trumps biological relevance Journal Article In: Psychological Research, vol. 81, no. 1, pp. 24–42, 2017. @article{Hermens2017b, Social cues presented at visual fixation have been shown to strongly influence an observer's attention and response selection. Here we ask whether the same holds for cues (initially) presented away from fixation, as cues are commonly perceived in natural vision. In six experiments, we show that extrafoveally presented cues with a distinct outline, such as pointing hands, rotated heads, and arrow cues result in strong cueing of responses (either to the cue itself, or a cued object). In contrast, cues without a clear outline, such as gazing eyes and direction words exert much weaker effects on participants' responses to a target cue. We also show that distraction effects on response times are relatively weak, but that strong interference effects can be obtained by measuring mouse trajectories. Eye tracking suggests that gaze cues are slower to respond to because their direction cannot easily be perceived in extrafoveal vision. Together, these data suggest that the strength of an extrafoveal cue is determined by the shape of the cue outline, rather than its biological relevance (i.e., whether the cue is provided by another human being), and that this shape effect is due to how easily the direction of a cue can be perceived in extrafoveal vision. |
Philipp N. Hesse; Frank Bremmer The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane Journal Article In: Vision Research, vol. 130, pp. 85–96, 2017. @article{Hesse2017a, The existence of an association between numbers and space has been known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers when they are asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371–396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers more on the right. The SNARC effect has been demonstrated for all three cardinal axes, and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519–1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along the HM and VM, which allowed us to predict the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane, our findings provide further evidence for the existence of a three-dimensional mental number space. |
Philipp N. Hesse; Constanze Schmitt; Steffen Klingenhoefer; Frank Bremmer Preattentive processing of numerical visual information Journal Article In: Frontiers in Human Neuroscience, vol. 11, pp. 70, 2017. @article{Hesse2017, Humans can perceive and estimate approximate numerical information, even when accurate counting is impossible, e.g., due to short presentation time. If the number of objects to be estimated is small, typically around one to four items, observers are able to give very fast and precise judgments with high confidence – an effect that is called subitizing. Due to its speed and effortless nature, subitizing has usually been assumed to be preattentive, putting it in the same category as other low-level visual features like color or orientation. More recently, however, a number of studies have suggested that subitizing might depend on attentional resources. In our current study we investigated the potentially preattentive nature of visual numerical perception in the subitizing range by means of EEG. We presented peripheral, task-irrelevant sequences of stimuli consisting of a certain number of circular patches while participants were engaged in a demanding, non-numerical detection task at the fixation point, drawing attention away from the number stimuli. Within a sequence of stimuli of a given number of patches (called ‘standards') we interspersed some stimuli of different numerosity (‘oddballs'). We compared the evoked responses to visually identical stimuli that had been presented in two different conditions, serving as standard in one condition and as oddball in the other. We found significant visual mismatch negativity (vMMN) responses over parieto-occipital electrodes. In addition to the ERP analysis, we performed a time-frequency analysis to investigate whether the vMMN was accompanied by additional oscillatory processes. We found a concurrent increase in evoked theta power of similar strength over both hemispheres. 
Our results provide clear evidence for a preattentive processing of numerical visual information in the subitizing range. |
Anne P. Hillstrom; Joice D. Segabinazi; Hayward J. Godwin; Simon P. Liversedge; Valerie Benson Cat and mouse search: The influence of scene and object analysis on eye movements when targets change locations during search Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 372, pp. 1–9, 2017. @article{Hillstrom2017, We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participant's first saccade during search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. |
Stephen J. Hinde; Tim J. Smith; Iain D. Gilchrist In search of oculomotor capture during film viewing: Implications for the balance of top-down and bottom-up control in the saccadic system Journal Article In: Vision Research, vol. 134, pp. 7–17, 2017. @article{Hinde2017, In the laboratory, the abrupt onset of a visual distractor can generate an involuntary orienting response: this robust oculomotor capture effect has been reported in a large number of studies (e.g. Ludwig & Gilchrist, 2002; Theeuwes, Kramer, Hahn, & Irwin, 1998), suggesting it may be a ubiquitous part of more natural visual behaviour. However, the visual stimuli used in these experiments have tended to be static and have had none of the complexity and dynamism of more natural visual environments. In addition, the primary task in the laboratory (typically visual search) can be tedious, with participants losing interest and becoming stimulus driven and more easily distracted. Both of these factors may have led to an overestimation of the extent to which oculomotor capture occurs and of the importance of this phenomenon in everyday visual behaviour. To address this issue, in the current series of studies we presented abrupt and highly salient visual distractors away from fixation while participants watched a film. No evidence of oculomotor capture was found. However, the distractor does affect fixation duration: we find an increase in fixation duration analogous to the remote distractor effect (Walker, Deubel, Schneider, & Findlay, 1997). These results suggest that during dynamic scene perception, the oculomotor system may be under far more top-down control than traditional laboratory-based tasks have previously suggested. |
Florian Goller; Donghoon Lee; Ulrich Ansorge; Soonja Choi Effects of language background on gaze behavior: A crosslinguistic comparison between Korean and German speakers Journal Article In: Advances in Cognitive Psychology, vol. 13, no. 4, pp. 267–279, 2017. @article{Goller2017, Languages differ in how they categorize spatial relations: While German differentiates between containment (in) and support (auf) with distinct spatial words – (a) den Kuli IN die Kappe stecken ("put pen in cap"); (b) die Kappe AUF den Kuli stecken ("put cap on pen") – Korean uses a single spatial word (kkita), collapsing (a) and (b) into one semantic category, particularly when the spatial enclosure is tight-fitting. Korean uses a different word (i.e., netha) for loose fits (e.g., apple in bowl). We tested whether these differences influence the attention of the speaker. In a crosslinguistic study, we compared native German speakers with native Korean speakers. Participants rated the similarity of two successive video clips of several scenes where two objects were joined or nested (either in a tight or loose manner). The rating data show that Korean speakers base their rating of similarity more on tight- versus loose-fit, whereas German speakers base their rating more on containment versus support (in vs. auf). Throughout the experiment, we also measured the participants' eye movements. Korean speakers looked equally long at the moving Figure object and at the stationary Ground object, whereas German speakers were more biased to look at the Ground object. Additionally, Korean speakers also looked more at the region where the two objects touched than did German speakers. We discuss our data in the light of crosslinguistic semantics and the extent of their influence on spatial cognition and perception. |
Frédéric Gosselin; Simon Faghel-Soubeyrand Stationary objects flashed periodically appear to move during smooth pursuit eye movement Journal Article In: Perception, vol. 46, no. 7, pp. 874–881, 2017. @article{Gosselin2017, We discovered that a white disc flashed twice at the same location appears to move during smooth pursuit eye tracking in the direction opposite to that of the eye movement. We called this novel phenomenon movement-induced apparent motion (MIAM). Using the method of constant stimuli, we measured the required displacement of the second appearance of the disc in the pursuit direction to null the effect during the closed-loop stage of smooth pursuit eye tracking. We observed a strong linear relationship between the points of subjective stationarity and the inter-stimuli intervals for four smooth pursuit eye movement speeds. The slopes and y-intercepts of these linear fits were well predicted by the hypothesis according to which subjects saw illusory motion from the first to the second retinal projections of the flashed disc during smooth pursuit eye movement, with no extra-retinal signal compensation. |
Harold H. Greene; James M. Brown Where did I come from? Where am I going? Functional differences in visual search fixation duration Journal Article In: Journal of Eye Movement Research, vol. 10, no. 1, pp. 1–13, 2017. @article{Greene2017, Real-time simulation of visual search behavior can occur only if the control of fixation durations is sufficiently understood. Visual search studies have typically confounded pre- and post-saccadic influences on fixation duration. In the present study, pre- and post-saccadic influences on fixation durations were compared by considering saccade direction. Novel use of a gaze-contingent moving obstructer paradigm also addressed the relative contributions of both influences to total fixation duration. As a function of saccade direction, pre-saccadic fixation durations exhibited a different pattern from post-saccadic fixation durations. Post-saccadic fixations were also more strongly influenced by peripheral obstruction than pre-saccadic fixation durations. This suggests that post-saccadic influences may contribute more to fixation durations than pre-saccadic influences. Together, the results demonstrate that it is insufficient to model the control of visual search fixation durations without consideration of pre- and post-saccadic influences. |
John A. Greenwood; Martin Szinte; Bilge Sayim; Patrick Cavanagh Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision Journal Article In: Proceedings of the National Academy of Sciences, vol. 114, no. 17, pp. E3573–E3582, 2017. @article{Greenwood2017, Visual sensitivity varies across the visual field in several characteristic ways. For example, sensitivity declines sharply in peripheral (vs. foveal) vision and is typically worse in the upper (vs. lower) visual field. These variations can affect processes ranging from acuity and crowding (the deleterious effect of clutter on object recognition) to the precision of saccadic eye movements. Here we examine whether these variations can be attributed to a common source within the visual system. We first compared the size of crowding zones with the precision of saccades using an oriented clock target and two adjacent flanker elements. We report that both saccade precision and crowded-target reports vary idiosyncratically across the visual field with a strong correlation across tasks for all participants. Nevertheless, both group-level and trial-by-trial analyses reveal dissociations that exclude a common representation for the two processes. We therefore compared crowding with two measures of spatial localization: Landolt-C gap resolution and three-dot bisection. Here we observe similar idiosyncratic variations with strong interparticipant correlations across tasks despite considerably finer precision. Hierarchical regression analyses further show that variations in spatial precision account for much of the variation in crowding, including the correlation between crowding and saccades. Altogether, we demonstrate that crowding, spatial localization, and saccadic precision show clear dissociations, indicative of independent spatial representations, whilst nonetheless sharing idiosyncratic variations in spatial topology. We propose that these topological idiosyncrasies are established early in the visual system and inherited throughout later stages to affect a range of higher-level representations. |
Joseph C. Griffis; Abdurahman S. Elkhetali; Wesley K. Burge; Richard H. Chen; Anthony D. Bowman; Jerzy P. Szaflarski; Kristina M. Visscher Retinotopic patterns of functional connectivity between V1 and large-scale brain networks during resting fixation Journal Article In: NeuroImage, vol. 146, pp. 1071–1083, 2017. @article{Griffis2017, Psychophysical and neurobiological evidence suggests that central and peripheral vision are specialized for different functions. This specialization of function might be expected to lead to differences in the large-scale functional interactions of early cortical areas that represent central and peripheral visual space. Here, we characterize differences in whole-brain functional connectivity among sectors in primary visual cortex (V1) corresponding to central, near-peripheral, and far-peripheral vision during resting fixation. Importantly, our analyses reveal that eccentricity sectors in V1 have different functional connectivity with non-visual areas associated with large-scale brain networks. Regions associated with the fronto-parietal control network are most strongly connected with central sectors of V1, regions associated with the cingulo-opercular control network are most strongly connected with near-peripheral sectors of V1, and regions associated with the default mode and auditory networks are most strongly connected with far-peripheral sectors of V1. Additional analyses suggest that similar patterns are present during eyes-closed rest. These results suggest that different types of visual information may be prioritized by large-scale brain networks with distinct functional profiles, and provide insights into how the small-scale functional specialization within early visual regions such as V1 relates to the large-scale organization of functionally distinct whole-brain networks. |
Elise Grison; Valérie Gyselinck; Jean Marie Burkhardt; Jan M. Wiener Route planning with transportation network maps: An eye-tracking study Journal Article In: Psychological Research, vol. 81, no. 5, pp. 1020–1034, 2017. @article{Grison2017, Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate psychological processes and mechanisms involved in such route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to the current theories of route planning. |
Axel Grzymisch; Cathleen Grimsen; Udo A. Ernst Contour integration in dynamic scenes: Impaired detection performance in extended presentations Journal Article In: Frontiers in Psychology, vol. 8, pp. 1501, 2017. @article{Grzymisch2017, Since scenes in nature are highly dynamic, perception requires an on-going and robust integration of local information into global representations. In vision, contour integration (CI) is one of these tasks, and it is performed by our brain in a seemingly effortless manner. Following the rule of good continuation, oriented line segments are linked into contour percepts, thus supporting important visual computations such as the detection of object boundaries. This process has been studied almost exclusively using static stimuli, raising the question of whether the observed robustness and “pop-out” quality of CI carries over to dynamic scenes. We investigate contour detection in dynamic stimuli where targets appear at random times by Gabor elements aligning themselves to form contours. In briefly presented displays (230 ms), a situation comparable to classical paradigms in CI, performance is about 87%. Surprisingly, we find that detection performance decreases to 67% in extended presentations (about 1.9–3.8 s) for the same target stimuli. In order to observe the same reduction with briefly presented stimuli, presentation time has to be drastically decreased to intervals as short as 50 ms. Cueing a specific contour position or shape helps partially compensate for this deterioration, and only in extended presentations was combining a location and a shape cue more efficient than providing a single cue. Our findings challenge the notion of CI as a mainly stimulus-driven process leading to pop-out percepts, indicating that top-down processes play a much larger role in supporting fundamental integration processes in dynamic scenes than previously thought. |
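The stimulus construction in the entry above, Gabor elements that align themselves to form a contour according to good continuation, can be sketched minimally: each element is oriented along the local tangent of the contour path. This tangent-based rule and the function name are illustrative assumptions, not the authors' stimulus code.

```python
import math

def contour_gabor_orientations(points):
    """Orientations (radians, mod pi) aligning Gabor elements along a path.

    Each element is oriented along the local tangent of the contour path,
    approximated from its neighbouring points (rule of good continuation).
    points : list of (x, y) positions of the Gabor elements along the path.
    """
    orientations = []
    for i in range(len(points)):
        # Central difference where possible; one-sided at the path endpoints.
        ax, ay = points[max(i - 1, 0)]
        bx, by = points[min(i + 1, len(points) - 1)]
        orientations.append(math.atan2(by - ay, bx - ax) % math.pi)
    return orientations
```

A horizontal path yields orientation 0 for every element; jittering the element positions or orientations away from the tangent would turn the contour back into background clutter.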
Maria J. S. Guerreiro; Pascal W. M. Van Gerven Disregarding hearing loss leads to overestimation of age-related cognitive decline Journal Article In: Neurobiology of Aging, vol. 56, pp. 180–189, 2017. @article{Guerreiro2017, Aging is associated with cognitive and sensory decline. While several studies have indicated greater cognitive decline among older adults with hearing loss, the extent to which age-related differences in cognitive processing may have been overestimated due to group differences in sensory processing has remained unclear. We addressed this question by comparing younger adults, older adults with good hearing, and older adults with poor hearing in several cognitive domains: working memory, selective attention, processing speed, inhibitory control, and abstract reasoning. Furthermore, we examined whether sensory-related cognitive decline depends on cognitive demands and on the sensory modality used for assessment. Our results revealed that age-related cognitive deficits in most cognitive domains varied as a function of hearing loss, being more pronounced in older adults with poor hearing. Furthermore, sensory-related cognitive decline was observed across different levels of cognitive demands and independent of the sensory modality used for cognitive assessment, suggesting a generalized effect of age-related hearing loss on cognitive functioning. As most cognitive aging studies have not taken sensory acuity into account, age-related cognitive decline may have been overestimated. |
Tjerk P. Gutteling; D. J. L. G. Schutter; W. Pieter Medendorp Alpha-band transcranial alternating current stimulation modulates precision, but not gain during whole-body spatial updating Journal Article In: Neuropsychologia, vol. 106, pp. 52–59, 2017. @article{Gutteling2017, Spatial updating is essential to maintain an accurate representation of our visual environment when we move. A neural mechanism that contributes to this ability is called remapping: the transfer of visual information from neural populations that code a location before the motion to those that encode it after the motion. While there is ample evidence for neural remapping in conjunction with eye movements, only recent findings suggest a role of this mechanism for whole-body motion updating, based on the observation that alpha band (10 Hz) activity is selectively reorganized during remapping. This study tested whether alpha oscillations directly contribute to whole-body motion updating using transcranial alternating current stimulation (tACS). In a double blind sham controlled design, healthy volunteers received 10 Hz tACS at an intensity of 1 mA over either the left or right posterior hemisphere during a whole-body motion updating task. Updating performance was assessed psychometrically and indices of gain and precision were obtained. No tACS-related effects on updating gain were found, irrespective of whether the remapping was across or within the hemispheres. In contrast, updating precision was enhanced when a target representation had to be internally remapped to the stimulated hemisphere, but not in other remapping conditions. Our observations suggest that alpha band oscillations do not directly affect the transfer of target representations during remapping, but increase the fidelity of the updated representation by attenuating interference of afferent information. |
Nathalie Guyader; Alan Chauvin; Muriel Boucart; Carole Peyrin Do low spatial frequencies explain the extremely fast saccades towards human faces? Journal Article In: Vision Research, vol. 133, pp. 100–111, 2017. @article{Guyader2017, Human visual perception of faces is fast and efficient compared to that of other categories of objects. Using a saccadic choice task, recent studies showed that participants were able to initiate fast reliable saccades in just 100–110 ms toward an image of a human face, when this was presented alongside another image without a face. This extremely fast saccadic reaction time is barely predicted by classical models of visual perception. Thus, the present research investigates whether this result might be explained by the low spatial frequency content of images. Using the same paradigm, with two images simultaneously presented to the left or right visual fields, participants were asked to make a saccade towards a target image. The target was defined as an image belonging to one category: human face, animal, or vehicle. The other image was the distractor and belonged to one of the other categories. We compared saccade performance across target categories. The two images were displayed either in color, gray-level, low-pass filtered, or high-pass filtered versions. As in previous studies, we found that the shortest SRTs were observed for saccades towards faces rather than towards animals or vehicles. Analysis of saccadic reaction time distributions showed that, in 130–140 ms, participants were able to make more correct than incorrect saccades towards faces for unfiltered (color and gray-level) and low-pass filtered images, whereas they needed more time for high-pass filtered images. In contrast, the minimum time participants needed to correctly saccade towards animals and vehicles was longer for low-pass and high-pass filtered than for unfiltered images. The analysis of the image statistics in the Fourier domain revealed that the amplitude spectrum of faces was mainly contained in the low spatial frequencies. Consistent with a coarse-to-fine processing of visual information, our results suggest that extremely fast saccades towards faces could be initiated by low spatial frequencies. |
Ruth Filik; Emily Brightman; Chloe Gathercole; Hartmut Leuthold The emotional impact of verbal irony: Eye-tracking evidence for a two-stage process Journal Article In: Journal of Memory and Language, vol. 93, pp. 193–202, 2017. @article{Filik2017, In this paper we investigate the socio-emotional functions of verbal irony. Specifically, we use eye-tracking while reading to assess moment-to-moment processing of a character's emotional response to ironic versus literal criticism. In Experiment 1, participants read stories describing a character being upset following criticism from another character. Results showed that participants initially more easily integrated a hurt response following ironic criticism; but later found it easier to integrate a hurt response following literal criticism. In Experiment 2, characters were instead described as having an amused response, which participants ultimately integrated more easily following ironic criticism. From this we propose a two-stage process of emotional responding to irony: While readers may initially expect a character to be more hurt by ironic than literal criticism, they ultimately rationalize ironic criticism as being less hurtful, and more amusing. |
Nonie J. Finlayson; Julie D. Golomb 2D location biases depth-from-disparity judgments but not vice versa Journal Article In: Visual Cognition, vol. 25, no. 9-10, pp. 841–852, 2017. @article{Finlayson2017a, Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias, where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object's 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing. |
Eugen Fischer; Paul E. Engelhardt Stereotypical inferences: Philosophical relevance and psycholinguistic toolkit Journal Article In: Ratio, vol. 30, no. 4, pp. 411–442, 2017. @article{Fischer2017, Stereotypes shape inferences in philosophical thought, political discourse, and everyday life. These inferences are routinely made when thinkers engage in language comprehension or production: We make them whenever we hear, read, or formulate stories, reports, philosophical case-descriptions, or premises of arguments – on virtually any topic. These inferences are largely automatic: largely unconscious, non-intentional, and effortless. Accordingly, they shape our thought in ways we can properly understand only by complementing traditional forms of philosophical analysis with experimental methods from psycholinguistics. This paper seeks, first, to bring out the wider philosophical relevance of stereotypical inference, well beyond familiar topics like gender and race. Second, we wish to provide (experimental) philosophers with a toolkit to experimentally study these ubiquitous inferences and what intuitions they may generate. This paper explains what stereotypes are (Section 1), and why they matter to current and traditional concerns in philosophy – experimental, analytic, and applied (Section 2). It then assembles a psycholinguistic toolkit and demonstrates through two studies (Sections 3–4) how questionnaire-based measures (plausibility ratings) can be combined with process measures (reaction times and pupillometry) to garner evidence for specific stereotypical inferences and study when they 'go through' and influence our thinking. |
Daniel Fiset; Caroline Blais; Jessica Royer; Anne-Raphaëlle Richoz; Gabrielle Dugas; Roberto Caldara Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 8, pp. 1334–1341, 2017. @article{Fiset2017, Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated to facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. |
Geoffrey Fisher An attentional drift diffusion model over binary-attribute choice Journal Article In: Cognition, vol. 168, pp. 34–45, 2017. @article{Fisher2017, In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. |
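The binary-attribute attentional drift-diffusion model (baDDM) in the entry above can be illustrated with a minimal one-trial simulation. The stochastic attention-switching rule, all parameter values, and the multipliers implementing the reported roughly +5% weight for the attended attribute and −10% for the unattended one are assumptions for illustration, not the paper's fitted model.

```python
import random

def simulate_baddm(w1, w2, v1, v2, p_attend_1=0.5,
                   theta_attended=1.05, theta_unattended=0.90,
                   threshold=1.0, noise=0.1, dt=0.01, max_t=10.0, rng=None):
    """Simulate one accept-reject trial of a binary-attribute aDDM (sketch).

    w1, w2 : baseline decision weights of the two attributes
    v1, v2 : values of the two attributes for the option on offer
    theta_attended / theta_unattended : hypothetical multipliers that boost
        the attended attribute's weight and discount the unattended one's.
    Returns ('accept' or 'reject', decision time in seconds).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Visual attention switches stochastically between the attributes,
        # reweighting the drift rate on each time step.
        if rng.random() < p_attend_1:
            drift = w1 * theta_attended * v1 + w2 * theta_unattended * v2
        else:
            drift = w1 * theta_unattended * v1 + w2 * theta_attended * v2
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("accept" if x >= threshold else "reject"), t
```

With both attribute values positive, the accumulator drifts toward the accept boundary, and shifting attention toward the more valuable attribute shortens the predicted reaction time, the qualitative pattern the model is fit to.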
Aleya Flechsenhar; Matthias Gamer Top-down influence on gaze patterns in the presence of social features Journal Article In: PLoS ONE, vol. 12, no. 8, pp. e0183799, 2017. @article{Flechsenhar2017, Visual saliency maps reflecting locations that stand out from the background in terms of their low-level physical features have proven to be very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e. human faces or bodies) would be processed preferentially over non-social features (i.e. objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was attended nevertheless. Moreover, after completing the task requirements, viewing behavior reverted back to that of free-viewing with heavy prioritization of social features. Additionally, initial eye movements, reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models. |
Joshua J. Foster; Emma M. Bsales; Russell J. Jaffe; Edward Awh Alpha-band activity reveals spontaneous representations of spatial position in visual working memory Journal Article In: Current Biology, vol. 27, no. 20, pp. 3216–3223, 2017. @article{Foster2017, An emerging view suggests that spatial position is an integral component of working memory (WM), such that non-spatial features are bound to locations regardless of whether space is relevant [1, 2]. For instance, past work has shown that stimulus position is spontaneously remembered when non-spatial features are stored. Item recognition is enhanced when memoranda appear at the same location where they were encoded [3–5], and accessing non-spatial information elicits shifts of spatial attention to the original position of the stimulus [6, 7]. However, these findings do not establish that a persistent, active representation of stimulus position is maintained in WM because similar effects have also been documented following storage in long-term memory [8, 9]. Here we show that the spatial position of the memorandum is actively coded by persistent neural activity during a non-spatial WM task. We used a spatial encoding model in conjunction with electroencephalogram (EEG) measurements of oscillatory alpha-band (8–12 Hz) activity to track active representations of spatial position. The position of the stimulus varied trial to trial but was wholly irrelevant to the tasks. We nevertheless observed active neural representations of the original stimulus position that persisted throughout the retention interval. Further experiments established that these spatial representations are dependent on the volitional storage of non-spatial features rather than being a lingering effect of sensory energy or initial encoding demands. These findings provide strong evidence that online spatial representations are spontaneously maintained in WM—regardless of task relevance—during the storage of non-spatial features. |
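The spatial encoding-model analysis summarized above, estimating electrode weights from alpha-band topographies and inverting them on held-out data to recover position-tuned channel responses, can be outlined as follows. The channel count, basis-function width, and least-squares formulation are assumptions for illustration; the paper's exact pipeline may differ.

```python
import numpy as np

def make_basis(n_channels=8, n_positions=8):
    """Raised-cosine tuning curves tiling angular position (channels x positions).

    The exponent controlling tuning width is an assumed value.
    """
    centers = np.arange(n_channels) * (2 * np.pi / n_channels)
    pos = np.arange(n_positions) * (2 * np.pi / n_positions)
    d = np.cos(pos[None, :] - centers[:, None])
    return np.clip(d, 0, None) ** 7

def train_iem(B_train, C_train):
    """Estimate electrode weights W (electrodes x channels) from training data.

    B_train : electrodes x trials matrix of alpha-band power
    C_train : channels x trials matrix of predicted channel responses
    Solves B = W @ C in the least-squares sense.
    """
    W_t, *_ = np.linalg.lstsq(C_train.T, B_train.T, rcond=None)
    return W_t.T

def invert_iem(W, B_test):
    """Recover channel responses from held-out topographies via pseudoinverse."""
    return np.linalg.pinv(W) @ B_test
```

On synthetic data generated from known weights, the inverted channel response peaks at the channel tuned to the stimulus position, which is the logic used to claim that position is actively represented during the retention interval.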
Tom Foulsham; Alan Kingstone Are fixations in static natural scenes a useful predictor of attention in the real world? Journal Article In: Canadian Journal of Experimental Psychology, vol. 71, no. 2, pp. 172–181, 2017. @article{Foulsham2017, Research investigating scene perception normally involves laboratory experiments using static images. Much has been learned about how observers look at pictures of the real world and the attentional mechanisms underlying this behaviour. However, the use of static, isolated pictures as a proxy for studying everyday attention in real environments has led to the criticism that such experiments are artificial. We report a new study that tests the extent to which the real world can be reduced to simpler laboratory stimuli. We recorded the gaze of participants walking on a university campus with a mobile eye tracker, and then showed static frames from this walk to new participants, in either a random or sequential order. The aim was to compare the gaze of participants walking in the real environment with fixations on pictures of the same scene. The data show that picture order affects interobserver fixation consistency and changes looking patterns. Critically, while fixations on the static images overlapped significantly with the actual real-world eye movements, they did so no more than a model that assumed a general bias to the centre. Remarkably, a model that simply takes into account where the eyes are normally positioned in the head—independent of what is actually in the scene—does far better than any other model. These data reveal that viewing patterns to static scenes are a relatively poor proxy for predicting real world eye movement behaviour, while raising intriguing possibilities for how to best measure attention in everyday life. |
Amber M. Fyall; Yasmine El-Shamayleh; Hannah Choi; Eric Shea-Brown; Anitha Pasupathy Dynamic representation of partially occluded objects in primate prefrontal and visual cortex Journal Article In: eLife, vol. 6, pp. 1–25, 2017. @article{Fyall2017, Successful recognition of partially occluded objects is presumed to involve dynamic interactions between brain areas responsible for vision and cognition, but neurophysiological evidence for the involvement of feedback signals is lacking. Here, we demonstrate that neurons in the ventrolateral prefrontal cortex (vlPFC) of monkeys performing a shape discrimination task respond more strongly to occluded than unoccluded stimuli. In contrast, neurons in visual area V4 respond more strongly to unoccluded stimuli. Analyses of V4 response dynamics reveal that many neurons exhibit two transient response peaks, the second of which emerges after vlPFC response onset and displays stronger selectivity for occluded shapes. We replicate these findings using a model of V4/vlPFC interactions in which occlusion-sensitive vlPFC neurons feed back to shape-selective V4 neurons, thereby enhancing V4 responses and selectivity to occluded shapes. These results reveal how signals from frontal and visual cortex could interact to facilitate object recognition under occlusion. |
Marc Galanter; Zoran Josipovic; Helen Dermatis; Jochen Weber; Mary Alice Millard An initial fMRI study on neural correlates of prayer in members of Alcoholics Anonymous Journal Article In: American Journal of Drug and Alcohol Abuse, vol. 43, no. 1, pp. 44–54, 2017. @article{Galanter2017, Background: Many individuals with alcohol-use disorders who had experienced alcohol craving before joining Alcoholics Anonymous (AA) report little or no craving after becoming long-term members. Their use of AA prayers may contribute to this. Neural mechanisms underlying this process have not been delineated. Objective: To define experiential and neural correlates of diminished alcohol craving following AA prayers among members with long-term abstinence. Methods: Twenty AA members with long-term abstinence participated. Self-report measures and functional magnetic resonance imaging of differential neural response to alcohol-craving-inducing images were obtained in three conditions: after reading of AA prayers, after reading irrelevant news, and with passive viewing. Random-effects robust regressions were computed for the main effect (prayer > passive + news) and for estimating the correlations between the main effect and the self-report measures. Results: Compared to the other two conditions, the prayer condition was characterized by: less self-reported craving; increased activation in left-anterior middle frontal gyrus, left superior parietal lobule, bilateral precuneus, and bilateral posterior middle temporal gyrus. Craving following prayer was inversely correlated with activation in brain areas associated with self-referential processing and the default mode network, and with characteristics reflecting AA program involvement. Conclusion: AA members' prayer was associated with a relative reduction in self-reported craving and with concomitant engagement of neural mechanisms that reflect control of attention and emotion. These findings suggest neural processes underlying the apparent effectiveness of AA prayer. |
Christine M. Gamble; Joo-Hyun Song Dynamic modulation of illusory and physical target size on separate and coordinated eye and hand movements Journal Article In: Journal of Vision, vol. 17, no. 3, pp. 1–23, 2017. @article{Gamble2017, In everyday behavior, two of the most common visually guided actions, eye and hand movements, can be performed independently, but are often synergistically coupled. In this study, we examine whether the same visual representation is used for different stages of saccades and pointing, namely movement preparation and execution, and whether this usage is consistent between independent and naturalistic coordinated eye and hand movements. To address these questions, we used the Ponzo illusion to dissociate the perceived and physical sizes of visual targets and measured the effects on movement preparation and execution for independent and coordinated saccades and pointing. During independent movements, we demonstrated that both physically and perceptually larger targets produced faster preparation for both effectors. Furthermore, participants who showed a greater influence of the illusion on saccade preparation also showed a greater influence on pointing preparation, suggesting that a shared mechanism involved in preparation across effectors is influenced by illusions. However, only physical but not perceptual target sizes influenced saccade and pointing execution. When pointing was coordinated with saccades, we observed different dynamics: pointing no longer showed modulation from illusory size, while saccades showed illusion modulation for both preparation and execution. Interestingly, in independent and coordinated movements, the illusion modulated saccade preparation more than pointing preparation, with this effect more pronounced during coordination. These results suggest a shared mechanism, dominated by the eyes, may underlie visually guided action preparation across effectors. Furthermore, the influence of illusions on action may operate within such a mechanism, leading to dynamic interactions between action modalities based on task demands. |
Selam W. Habtegiorgis; Katharina Rifai; Markus Lappe; Siegfried Wahl Adaptation to skew distortions of natural scenes and retinal specificity of its aftereffects Journal Article In: Frontiers in Psychology, vol. 8, pp. 1158, 2017. @article{Habtegiorgis2017, Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. |
Dorothea Hämmerer; Alexandra Hopkins; Matthew J. Betts; Anne Maaß; Raymond J. Dolan; Emrah Düzel In: Neurobiology of Aging, vol. 58, pp. 129–139, 2017. @article{Haemmerer2017, A better memory for negative emotional events is often attributed to a conjoint impact of increased arousal and noradrenergic modulation (NA). A decline in NA during aging is well documented but its impact on memory function during aging is unclear. Using pupil diameter (PD) as a proxy for NA, we examined age differences in memory for negative events in younger (18–30 years) and older (62–83 years) adults based on a segregation of early arousal to negative events, and later retrieval-related PD responses. In keeping with the hypothesis of reduced age-related NA influences, older adults showed attenuated induced PD responses to negative emotional events. The findings highlight a likely contribution of NA to negative emotional memory, mediated via arousal that may be compromised with aging. |
Paul Hands; Jenny C. A. Read True stereoscopic 3D cannot be simulated by shifting 2D content off the screen plane Journal Article In: Displays, vol. 48, pp. 35–40, 2017. @article{Hands2017, Generating stereoscopic 3D (S3D) content is expensive, so industry producers sometimes attempt to save money by including brief sections of 2D content displayed with a uniform disparity, i.e. the 2D image is geometrically shifted behind the screen plane. This manipulation is believed to produce an illusion of depth which, while not as powerful as true S3D, is nevertheless more compelling than simple 2D. Our study examined whether this belief is correct. 30 s clips from a nature documentary were shown in the original S3D, in ordinary 2D and in shifted versions of S3D and 2D. Participants were asked to rate the impression of depth on a 7-point Likert scale. There was a clear and highly significant difference between the S3D depth perception (mean 6.03) and the shifted 2D depth perception (mean 4.13) (P = 0.002, ANOVA). There was no difference between ordinary 2D presented on the screen plane, and the shifted 2D. We conclude that the shifted 2D method not only fails to mimic the depth effect of true S3D, it in fact has no benefit over ordinary 2D in terms of the depth illusion created. This could impact viewing habits of people who notice the difference in depth quality. |
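The shifted-2D manipulation tested above, a single 2D image displayed with uniform disparity, amounts to nothing more than a horizontal offset between the two eyes' copies of the frame. A minimal sketch of that construction follows; the function name and the uncrossed-disparity sign convention (left-eye copy shifted left places the frame behind the screen plane) are assumptions for illustration.

```python
import numpy as np

def shift_2d_to_depth(frame, disparity_px):
    """Build a fake 'stereo' pair from one 2D frame via uniform disparity.

    Shifting the left-eye copy left and the right-eye copy right by half the
    disparity yields uncrossed disparity for every pixel, i.e. the whole
    frame is geometrically placed behind the screen plane.
    """
    half = disparity_px // 2
    left = np.roll(frame, -half, axis=1)   # left-eye copy shifted left
    right = np.roll(frame, half, axis=1)   # right-eye copy shifted right
    return left, right
```

Because every pixel receives the same disparity, the pair contains no depth variation at all, which is consistent with the study's finding that the manipulation adds nothing over ordinary 2D.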
Anthony M. Harris; Roger W. Remington Contextual cueing improves attentional guidance, even when guidance is supposedly optimal Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 5, pp. 926–940, 2017. @article{Harris2017, Visual search through previously encountered contexts typically produces reduced reaction times compared with search through novel contexts. This contextual cueing benefit is well established, but there is debate regarding its underlying mechanisms. Eye-tracking studies have consistently shown reduced number of fixations with repetition, supporting improvements in attentional guidance as the source of contextual cueing. However, contextual cueing benefits have been shown in conditions in which attentional guidance should already be optimal—namely, when attention is captured to the target location by an abrupt onset, or under pop-out conditions. These results have been used to argue for a response-related account of contextual cueing. Here, we combine eye tracking with response time to examine the mechanisms behind contextual cueing in spatially cued and pop-out conditions. Three experiments find consistent response time benefits with repetition, which appear to be driven almost entirely by a reduction in number of fixations, supporting improved attentional guidance as the mechanism behind contextual cueing. No differences were observed in the time between fixating the target and responding—our proxy for response related processes. Furthermore, the correlation between contextual cueing magnitude and the reduction in number of fixations on repeated contexts approaches 1. These results argue strongly that attentional guidance is facilitated by familiar search contexts, even when guidance is near-optimal. |
Siobhán Harty; Peter R. Murphy; Ian H. Robertson; Redmond G. O'Connell Parsing the neural signatures of reduced error detection in older age Journal Article In: NeuroImage, vol. 161, pp. 43–55, 2017. @article{Harty2017, Recent work has demonstrated that explicit error detection relies on a neural evidence accumulation process that can be traced in the human electroencephalogram (EEG). Here, we sought to establish the impact of natural aging on this process by recording EEG from young (18–35 years) and older adults (65–88 years) during the performance of a Go/No-Go paradigm in which participants were required to overtly signal their errors. Despite performing the task with equivalent accuracy, older adults reported substantially fewer errors, and the timing of their reports was both slower and more variable. These behavioral differences were linked to three key neurophysiological changes reflecting distinct parameters of the error detection decision process: a reduction in medial frontal delta/theta (2–7 Hz) activity, indicating diminished top-down input to the decision process; a slower rate of evidence accumulation as indexed by the rate of rise of a centro-parietal signal, known as the error positivity; and a higher motor execution threshold as indexed by lateralized beta-band (16–30 Hz) activity. Our data provide novel insight into how the natural aging process affects the neural underpinnings of error detection. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017. @article{Hasanzadeh2017a, Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. 
The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs. |
Sogand Hasanzadeh; Behzad Esmaeili; Michael D. Dodd Impact of construction workers' hazard identification skills on their visual attention Journal Article In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017. @article{Hasanzadeh2017, Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors–such as failure to identify a hazard–are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics–fixation count, dwell-time percentage, and run count–were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes were followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. 
The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills and at-risk workers. Because discriminant function analysis is similar to regression, discriminant function (linear combinations of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye-movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time. |
S. A. Hassani; Mariann Oemisch; M. Balcarras; Stephanie Westendorff; S. Ardid; M. A. Meer; P. Tiesinga; T. Womelsdorf In: Scientific Reports, vol. 7, pp. 40606, 2017. @article{Hassani2017, Noradrenaline is believed to support cognitive flexibility through the alpha 2A noradrenergic receptor (a2A-NAR) acting in prefrontal cortex. Enhanced flexibility has been inferred from improved working memory with the a2A-NA agonist Guanfacine. But it has been unclear whether Guanfacine improves specific attention and learning mechanisms beyond working memory, and whether the drug effects can be formalized computationally to allow single subject predictions. We tested and confirmed these suggestions in a case study with a healthy nonhuman primate performing a feature-based reversal learning task evaluating performance using Bayesian and Reinforcement learning models. In an initial dose-testing phase we found a Guanfacine dose that increased performance accuracy, decreased distractibility and improved learning. In a second experimental phase using only that dose we examined the faster feature-based reversal learning with Guanfacine with single-subject computational modeling. Parameter estimation suggested that improved learning is not accounted for by varying a single reinforcement learning mechanism, but by changing the set of parameter values to higher learning rates and stronger suppression of non-chosen over chosen feature information. These findings provide an important starting point for developing nonhuman primate models to discern the synaptic mechanisms of attention and learning functions within the context of a computational neuropsychiatry framework. |
Taylor R. Hayes; John M. Henderson Scan patterns during real-world scene viewing predict individual differences in cognitive capacity Journal Article In: Journal of Vision, vol. 17, no. 5, pp. 1–17, 2017. @article{Hayes2017, From the earliest recordings of eye movements during active scene viewing to the present day, researchers have commonly reported individual differences in eye movement scan patterns under constant stimulus and task demands. These findings suggest viewer individual differences may be important for understanding gaze control during scene viewing. However, the relationship between scan patterns and viewer individual differences during scene viewing remains poorly understood because scan patterns are difficult to analyze. The present study uses a powerful technique called Successor Representation Scanpath Analysis (Hayes, Petrov, & Sederberg, 2011, 2015) to quantify the strength of the association between individual differences in scan patterns during real-world scene viewing and individual differences in viewer intelligence, working memory capacity, and speed of processing. The results of this analysis revealed individual differences in scan patterns that explained more than 40% of the variance in viewer intelligence and working memory capacity measures, and more than a third of the variance in speed of processing measures. The theoretical implications of our findings for models of gaze control and avenues for future individual differences research are discussed. |
Dana A. Hayward; Willa Voorhies; Jenna L. Morris; Francesca Capozzi; Jelena Ristic Staring reality in the face: A comparison of social attention across laboratory and real world measures suggests little common ground Journal Article In: Canadian Journal of Experimental Psychology, vol. 71, no. 3, pp. 212–225, 2017. @article{Hayward2017, The ability to attend to someone else's gaze is thought to represent 1 of the essential building blocks of the human sociocognitive system. This behavior, termed social attention, has traditionally been assessed using laboratory procedures in which participants' response time and/or accuracy performance indexes attentional function. Recently, a parallel body of emerging research has started to examine social attention during real life social interactions using naturalistic and observational methodologies. The main goal of the present work was to begin connecting these two lines of inquiry. To do so, here we operationalized, indexed, and measured the engagement and shifting components of social attention using covert and overt measures. These measures were obtained during an unconstrained real-world social interaction and during a typical laboratory social cuing task. Our results indicated reliable and overall similar indices of social attention engagement and shifting within each task. However, these measures did not relate across the 2 tasks. We discuss these results as potentially reflecting the differences in social attention mechanisms, the specificity of the cuing task's measurement, as well as possible general dissimilarities with respect to context, task goals, and/or social presence. |
David Garcia-Burgos; Junpeng Lao; Simone Munsch; Roberto Caldara Visual attention to food cues is differentially modulated by gustatory-hedonic and post-ingestive attributes Journal Article In: Food Research International, vol. 97, pp. 199–208, 2017. @article{GarciaBurgos2017, Although attentional biases towards food cues may play a critical role in food choices and eating behaviours, it remains largely unexplored which specific food attribute governs visual attentional deployment. The allocation of visual attention might be modulated by anticipatory postingestive consequences, from taste sensations derived from eating itself, or both. Therefore, in order to obtain a comprehensive understanding of the attentional mechanisms involved in the processing of food-related cues, we recorded the eye movements to five categories of well-standardised pictures: neutral non-food, high-calorie, good taste, distaste and dangerous food. In particular, forty-four healthy adults of both sexes were assessed with an antisaccade paradigm (which requires the generation of a voluntary saccade and the suppression of a reflex one) and a free viewing paradigm (which implies the free visual exploration of two images). The results showed that observers directed their initial fixations more often and faster on items with high survival relevance such as nutrient and possible dangers; although an increase in antisaccade error rates was only detected for high-calorie items. We also found longer prosaccade fixation duration and initial fixation duration bias score related to maintained attention towards high-calorie, good taste and danger categories; while shorter reaction times to correct an incorrect prosaccade related to less difficulties in inhibiting distasteful images. 
Altogether, these findings suggest that visual attention is differentially modulated by both the accepted and rejected food attributes, but also that normal-weight, non-eating disordered individuals exhibit enhanced approach to food's postingestive effects and avoidance of distasteful items (such as bitter vegetables or pungent products). |
Ray Garza; Roberto R. Heredia; Anna B. Cieślicka An eye tracking examination of men's attractiveness by conceptive risk women Journal Article In: Evolutionary Psychology, vol. 15, no. 1, pp. 1–11, 2017. @article{Garza2017, Previous research has indicated that women prefer men who exhibit an android physical appearance where fat distribution is deposited on the upper body (i.e., shoulders and arms) and abdomen. This ideal physical shape has been associated with perceived dominance, health, and immunocompetence. Although research has investigated attractability of men with these ideal characteristics, research on how women visually perceive these characteristics is limited. The current study investigated visual perception and attraction toward men in Hispanic women of Mexican American descent. Women exposed to a front-posed image, where the waist-to-chest ratio (WCR) and hair distribution were manipulated, rated men's body image associated with upper body strength (low WCR 0.7) as more attractive. Additionally, conceptive risk did not play a strong role in attractiveness and visual attention. Hair distribution did not contribute to increased ratings of attraction but did contribute to visual attraction when measuring total time where men with both facial and body hair were viewed longer. These findings suggest that physical characteristics in men exhibiting upper body strength and dominance are strong predictors of visual attraction. |
Nicholas Gaspelin; Carly J. Leonard; Steven J. Luck Suppression of overt attentional capture by salient-but-irrelevant color singletons Journal Article In: Attention, Perception, and Psychophysics, vol. 79, no. 1, pp. 45–62, 2017. @article{Gaspelin2017, For more than 2 decades, researchers have debated the nature of cognitive control in the guidance of visual attention. Stimulus-driven theories claim that salient stimuli automatically capture attention, whereas goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. In the current study, we tested a hybrid account called the signal suppression hypothesis, which claims that all stimuli automatically generate a salience signal but that this signal can be actively suppressed by top-down attentional mechanisms. Previous behavioral and electrophysiological research has shown that participants can suppress covert shifts of attention to salient-but-irrelevant color singletons. In this study, we used eye-tracking methods to determine whether participants can also suppress overt shifts of attention to irrelevant singletons. We found that under conditions that promote active suppression of the irrelevant singletons, overt attention was less likely to be directed toward the salient distractors than toward nonsalient distractors. This result provides direct evidence that people can suppress salient-but-irrelevant singletons below baseline levels. |
Alexander Geiger; Eva Niessen; Gary Bente; Kai Vogeley Eyes versus hands: How perceived stimuli influence motor actions Journal Article In: PLoS ONE, vol. 12, no. 7, pp. e0180780, 2017. @article{Geiger2017, Many studies showed that biological (e.g., gaze-shifts or hand movements) and non-biological stimuli (e.g., arrows or moving points) redirect attention. Biological stimuli seem to be more suitable than non-biological to perform this task. However, the question remains if biological stimuli do have different influences on redirecting attention and if this property is dependent on how we react to those stimuli. In two separate experiments, participants interacted either with a biological or a non-biological stimulus (experiment 1), or with two biological stimuli (gaze-shifts, hand movements) (experiment 2), to which they responded with two different actions (saccade, button press), either in a congruent or incongruent manner. Results from experiment 1 suggest that interacting with the biological stimulus led to faster responses, compared to the non-biological stimulus, independent of the response type. Results from experiment 2 show longer reaction times when the depicted stimulus was not matching the response type (e.g., reacting with hand movements to a moving object or gaze-shift) compared to a matching condition, while especially the gaze-following condition (reacting with a gaze shift to a perceived gaze shift) led to the fastest responses. These results suggest that redirecting attention is not only dependent on the perceived stimulus but also on the way those stimuli are responded to. |
Joy J. Geng; N. E. Di Quattro; Jonathan Helm Distractor probability changes the shape of the attentional template Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 12, pp. 1993–2007, 2017. @article{Geng2017, Theories of attention commonly refer to the “attentional template” as the collection of features in working memory that represent the target of visual search. Many models of attention assume that the template contains a veridical representation of target features, but recent studies have shown that the target representation is “shifted” away from distractor features in order to optimize their distinctiveness and facilitate visual search. Here, we manipulated the probability of target-similar distractors during a visual search task in 2 groups, and separately measured the contents of the attentional template. We hypothesized that having a high probability of target-similar distractors would increase pressure to shift and/or sharpen the target representation in order to increase the distinctiveness of targets from distractors. We found that the high-similarity group experienced less distractor interference during visual search, but only for highly target-similar distractors. Additionally, while both groups shifted the target representation away from the actual target color, the high-similarity group also had a sharper representation of the target color. We conclude that the contents of the attentional template in working memory can be flexibly adjusted with multiple mechanisms to increase target-to-distractor distinctiveness and optimize attentional selection. |
Edden M. Gerber; Tal Golan; Robert T. Knight; Leon Y. Deouell Cortical representation of persistent visual stimuli Journal Article In: NeuroImage, vol. 161, pp. 67–79, 2017. @article{Gerber2017, Research into visual neural activity has focused almost exclusively on onset- or change-driven responses and little is known about how information is encoded in the brain during sustained periods of visual perception. We used intracranial recordings in humans to determine the degree to which the presence of a visual stimulus is persistently encoded by neural activity. The correspondence between stimulus duration and neural response duration was strongest in early visual cortex and gradually diminished along the visual hierarchy, such that it was weakest in inferior-temporal category-selective regions. A similar posterior-anterior gradient was found within inferior temporal face-selective regions, with posterior but not anterior sites showing persistent face-selective activity. The results suggest that regions that appear uniform in terms of their category selectivity are dissociated by how they temporally represent a stimulus in support of ongoing visual perception, and delineate a large-scale organizing principle of the ventral visual stream. |
Tobias Gerstenberg; Matthew F. Peterson; Noah D. Goodman; David A. Lagnado; Joshua B. Tenenbaum Eye-tracking causality Journal Article In: Psychological Science, vol. 28, no. 12, pp. 1731–1744, 2017. @article{Gerstenberg2017, How do people make causal judgments? What role, if any, does counterfactual simulation play? Counterfactual theories of causal judgments predict that people compare what actually happened with what would have happened if the candidate cause had been absent. Process theories predict that people focus only on what actually happened, to assess the mechanism linking candidate cause and outcome. We tracked participants' eye movements while they judged whether one billiard ball caused another one to go through a gate or prevented it from going through. Both participants' looking patterns and their judgments demonstrated that counterfactual simulation played a critical role. Participants simulated where the target ball would have gone if the candidate cause had been removed from the scene. The more certain participants were that the outcome would have been different, the stronger the causal judgments. These results provide the first direct evidence for spontaneous counterfactual simulation in an important domain of high-level cognition. |
Anna C. Geuzebroek; Albert V. Berg Impaired visual competition in patients with homonymous visual field defects Journal Article In: Neuropsychologia, vol. 97, pp. 152–162, 2017. @article{Geuzebroek2017, Intense visual training can lead to partial recovery of visual field defects caused by lesions of the primary visual cortex. However, the standard visual detection and discrimination tasks used to assess this recovery process tend to ignore the complexity of the natural visual environment, where multiple stimuli continuously interact. Visual competition is an essential component for natural search tasks and detecting unexpected events. Our study focused on visual decision-making and to what extent the recovered visual field can compete for attention with the ‘intact' visual field. Nine patients with visual field defects who had previously received visual discrimination training were compared to healthy age-matched controls using a saccade target-selection paradigm, in which participants actively make a saccade towards the brighter of two flashed targets. To further investigate the nature of competition (feed-forward or feedback inhibition), we presented two flashes that reversed their intensity difference during the flash. Both competition between recovered visual field and intact visual field, as well as competition within the intact visual field, were assessed. Healthy controls showed the expected primacy effect; they preferred the initially brighter target. Surprisingly, choice behaviour, even in the patients' supposedly ‘intact' visual field, was significantly different from the control group for all but one. In the latter patient, competition was comparable to the controls. All other patients showed a significantly reduced preference to the brighter target, but still showed a small hint of primacy in the reversal conditions. 
The present results indicate that patients and controls have similar decision-making mechanisms but patients' choices are affected by a strong tendency to guess, even in the intact visual field. This tendency likely reveals slower integration of information, paired with a lower threshold. Current rehabilitation should therefore also include training focused on improving visual decision-making of the defective and the intact visual field. |
Saeideh Ghahghaei; Preeti Verghese Texture segmentation influences the spatial profile of presaccadic attention Journal Article In: Journal of Vision, vol. 17, no. 2, pp. 1–16, 2017. @article{Ghahghaei2017, Attention is important for selecting targets for action. Several studies have shown that attentional selection precedes eye movements to a target, and results in an enhanced sensitivity at the saccade goal. Typically these studies have used isolated targets on blank backgrounds, which are rare in real-world situations. Here, we examine the spatial profile of sensitivity around a saccade target on a textured background and how the influence of the surrounding context develops over time. We used two textured backgrounds: a uniform texture, and a concentric arrangement of an inner and an outer texture with orthogonal orientations. For comparison, we also measured sensitivity around the target on a blank background. The spatial profile of sensitivity was measured with a brief, dim, probe flashed around the saccade target. When the target was on a blank or a uniformly textured background, spatial sensitivity peaked near the target location around 350 ms after cue onset and declined with distance from the target. However, when the background was made up of an inner and outer texture, sensitivity to the inner texture was uniformly high, peaking at about 350 ms after cue onset, suggesting that the entire inner texture was selected along with the target. The enhancement of sensitivity on the inner texture was much smaller when observers attended the target covertly and performed the probe-detection task. Thus, our results suggest that the surface representation around the target is taken into account when an observer actively plans to interact with the target. |
Tandra Ghose; Yannik T. H. Schelske; Takeshi Suzuki; Andreas Dengel Low-level pixelated representations suffice for aesthetically pleasing contrast adjustment in photographs Journal Article In: Psihologija, vol. 50, no. 3, pp. 239–270, 2017. @article{Ghose2017, Today's web-based automatic image enhancement algorithms decide to apply an enhancement operation by searching for “similar” images in an online database of images and then applying the same level of enhancement as the image in the database. Two key bottlenecks in these systems are the storage cost for images and the cost of the search. Based on the principles of computational aesthetics, we consider storing task-relevant aesthetic summaries, a set of features which are sufficient to predict the level at which an image enhancement operation should be performed, instead of the entire image. The empirical question, then, is to ensure that the reduced representation indeed maintains enough information so that the resulting operation is perceived to be aesthetically pleasing to humans. We focus on the contrast adjustment operation, an important image enhancement primitive. We empirically study the efficacy of storing a pixelated summary of the 16 most representative colors of an image and performing contrast adjustments on this representation. We tested two variants of the pixelated image: a “mid-level pixelized version” that retained spatial relationships and allowed for region segmentation and grouping as in the original image and a “low-level pixelized-random version” which only retained the colors by randomly shuffling the 50 x 50 pixels. In an empirical study on 25 human subjects, we demonstrate that the preferred contrast for the low-level pixelized-random image is comparable to the original image even though it retains very few bits and no semantic information, thereby making it ideal for image matching and retrieval for automated contrast editing. 
In addition, we use an eye tracking study to show that users focus only on a small central portion of the low-level image, thus improving the performance of image search over commonly used computer vision algorithms to determine interesting key points. |
Steven M. Gillespie; Pia Rotshtein; Anthony R. Beech; Ian J. Mitchell Boldness psychopathic traits predict reduced gaze toward fearful eyes in men with a history of violence Journal Article In: Biological Psychology, vol. 128, pp. 29–38, 2017. @article{Gillespie2017, Research with developmental and adult samples has shown a relationship of psychopathic traits with reduced eye gaze. However, these relationships remained to be investigated among forensic samples. Here we examined the eye movements of male violent offenders during an emotion recognition task. Violent offenders performed similar to non-offending controls, and their eye movements varied with the emotion and intensity of the facial expression. In the violent offender group Boldness psychopathic traits, but not Meanness or Disinhibition, were associated with reduced dwell time and fixation counts, and slower first fixation latencies, on the eyes compared with the mouth. These results are the first to show a relationship of psychopathic traits with reduced attention to the eyes in a forensic sample, and suggest that Boldness is associated with difficulties in orienting attention toward emotionally salient aspects of the face. |
Mackenzie G. Glaholt; Grace Sim Gaze-contingent center-surround fusion of infrared images to facilitate visual search for human targets Journal Article In: Journal of Imaging Science and Technology, vol. 61, no. 1, pp. 230–235, 2017. @article{Glaholt2017, We investigated gaze-contingent fusion of infrared imagery during visual search. Eye movements were monitored while subjects searched for and identified human targets in images captured simultaneously in the short-wave (SWIR) and long-wave (LWIR) infrared bands. Based on the subject's gaze position, the search display was updated such that imagery from one sensor was continuously presented to the subject's central visual field (“center”) and another sensor was presented to the subject's non-central visual field (“surround”). Analysis of performance data indicated that, compared to the other combinations, the scheme featuring SWIR imagery in the center region and LWIR imagery in the surround region constituted an optimal combination of the SWIR and LWIR information: it inherited the superior target detection performance of LWIR imagery and the superior target identification performance of SWIR imagery. This demonstrates a novel method for efficiently combining imagery from two infrared sources as an alternative to conventional image fusion. |
Hayward J. Godwin; Tamaryn Menneer; Simon P. Liversedge; Kyle R. Cave; Nick S. Holliman; Nick Donnelly Adding depth to overlapping displays can improve visual search performance Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 8, pp. 1532–1549, 2017. @article{Godwin2017, Standard models of visual search have focused upon asking participants to search for a single target in displays where the objects do not overlap one another, and where the objects are presented on a single depth plane. This stands in contrast to many everyday visual searches wherein variations in overlap and depth are the norm, rather than the exception. Here, we addressed whether presenting overlapping objects on different depth planes to one another can improve search performance. Across 4 different experiments using different stimulus types (opaque polygons, transparent polygons, opaque real-world objects, and transparent X-ray images), we found that depth was primarily beneficial when the displays were transparent, and this benefit arose in terms of an increase in response accuracy. Although the benefit to search performance only appeared in some cases, across all stimulus types, we found evidence of marked shifts in eye-movement behavior. Our results have important implications for current models and theories of visual search, which have not yet provided detailed accounts of the effects that overlap and depth have on guidance and object identification processes. Moreover, our results show that the presence of depth information could aid real-world searches of complex, overlapping displays. |
Hayward J. Godwin; Erik D. Reichle; Tamaryn Menneer Modeling lag-2 revisits to understand trade-offs in mixed control of fixation termination during visual search Journal Article In: Cognitive Science, vol. 41, no. 4, pp. 996–1019, 2017. @article{Godwin2017a, An important question about eye-movement behavior is when the decision is made to terminate a fixation and program the following saccade. Different approaches have found converging evidence in favor of a mixed-control account, in which there is some overlap between processing information at fixation and planning the following saccade. We examined one interesting instance of mixed control in visual search: lag-2 revisits, during which observers fixate a stimulus, move to a different stimulus, and then revisit the first stimulus on the next fixation. Results show that the probability of lag-2 revisits occurring increased with the number of target-similar stimuli, and revisits were preceded by a brief fixation on the intervening distractor stimulus. We developed the Efficient Visual Sampling (EVS) computational model to simulate our findings (fixation durations and fixation locations) and to provide insight into mixed control of fixations and the perceptual, cognitive, and motor processes that produce lag-2 revisits. |