All EyeLink Publications
All 12,000+ peer-reviewed EyeLink research publications through 2023 (with some early 2024 papers) are listed below by year. You can search the publications library using keywords such as Visual Search, Smooth Pursuit, Parkinson's, etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we missed any EyeLink eye-tracking papers, please email us!
2023 |
Felipe Luzardo; Wolfgang Einhäuser; Monique Michl; Yaffa Yeshurun Attention does not spread automatically along objects: Evidence from the pupillary light response Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 7, pp. 2040–2051, 2023. @article{Luzardo2023, Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing in a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypothesis that attention spreads automatically along the cued object, we utilized a continuous, response-free measurement of attentional allocation that relies on the modulation of the pupillary light response. In Experiments 1 and 2, attentional spreading was not encouraged because the target appeared often (60%) at the cued location and considerably less often at other locations (20% within the same object and 20% on another object). In Experiment 3, spreading was encouraged because the target appeared equally often in one of the three possible locations within the cued object (cued end, middle, uncued end). In all experiments, we added gray-to-black and gray-to-white luminance gradients to the objects. By cueing the gray ends of the objects, we could track attention. If attention indeed spreads automatically along objects, then pupil size should be greater after the gray-to-dark object is cued because attention spreads toward darker areas of the object than when the gray-to-white object is cued, regardless of the target location probability. However, unequivocal evidence of attentional spreading was only found when spreading was encouraged. These findings do not support an automatic spreading of attention. Instead, they suggest that attentional spreading along the object is guided by cue–target contingencies. |
Shira M. Lupkin; Vincent B. McGinty Monkeys exhibit human-like gaze biases in economic decisions Journal Article In: eLife, vol. 12, pp. 1–27, 2023. @article{Lupkin2023, In economic decision-making individuals choose between items based on their perceived value. For both humans and nonhuman primates, these decisions are often carried out while shifting gaze between the available options. Recent studies in humans suggest that these shifts in gaze actively influence choice, manifesting as a bias in favor of the items that are viewed first, viewed last, or viewed for the overall longest duration in a given trial. This suggests a mechanism that links gaze behavior to the neural computations underlying value-based choices. In order to identify this mechanism, it is first necessary to develop and validate a suitable animal model of this behavior. To this end, we have created a novel value-based choice task for macaque monkeys that captures the essential features of the human paradigms in which gaze biases have been observed. Using this task, we identified gaze biases in the monkeys that were both qualitatively and quantitatively similar to those in humans. In addition, the monkeys' gaze biases were well-explained using a sequential sampling model framework previously used to describe gaze biases in humans—the first time this framework has been used to assess value-based decision mechanisms in nonhuman primates. Together, these findings suggest a common mechanism that can explain gaze-related choice biases across species, and open the way for mechanistic studies to identify the neural origins of this behavior. |
Yingyi Luo; Dixiao Tan; Ming Yan Morphological structure influences saccade generation in Chinese reading Journal Article In: Reading and Writing, vol. 36, no. 5, pp. 1–17, 2023. @article{Luo2023d, Recent studies have demonstrated that saccadic programming in reading is not only determined by low-level visual factors. High-level morphological effects on saccade have been shown in two morphologically rich languages. In the present study, we examined the underlying mechanism of such morphological influences by comparing the processes of reading three-character Chinese compound words that differ in their structures in terms of morphological decomposition. Consistent with earlier reports, our results showed an effect of morphological structure on saccade. The readers' first-fixation location shifted further away from the beginning of the word, when the last two characters were more morphologically bounded and thus formed a [1 + 2] structure, than when the first two characters were more bounded (i.e., a [2 + 1] structure). The results cannot be accounted for by a processing difficulty hypothesis, which proposes that saccade amplitude is determined by morphological complexity; rather, they suggest that Chinese readers parafoveally decompose a word and spontaneously target its longer stem, thus reflecting parafoveal access to words' stems. |
Xiaoxiao Luo; Lihui Wang; Jiayan Gu; Qiongting Zhang; Hongyu Ma; Xiaolin Zhou The benefit of making voluntary choices generalizes across multiple effectors Journal Article In: Psychonomic Bulletin & Review, pp. 1–13, 2023. @article{Luo2023c, It has been shown that cognitive performance could be improved by expressing volition (e.g., making voluntary choices), which necessarily involves the execution of action through a certain effector. However, it is unclear if the benefit of expressing volition can generalize across different effectors. In the present study, participants made a choice between two pictures either voluntarily or forcibly, and subsequently completed a visual search task with the chosen picture as a task-irrelevant background. The effector for choosing a picture could be the hand (pressing a key), foot (pedaling), mouth (commanding), or eye (gazing), whereas the effector for responding to the search target was always the hand. Results showed that participants responded faster and had a more liberal response criterion in the search task after a voluntary choice (vs. a forced choice). Importantly, the improved performance was observed regardless of which effector was used in making the choice, and regardless of whether the effector for making choices was the same as or different from the effector for responding to the search target. Eye-movement data for oculomotor choice showed that the main contributor to the facilitatory effect of voluntary choice was the post-search time in the visual search task (i.e., the time spent on processes after the target was found, such as response selection and execution). These results suggest that the expression of volition may involve the motor control system in which the effector-general, high-level processing of the goal of the voluntary action plays a key role. |
Junlian Luo; Thérèse Collins The representational similarity between visual perception and recent perceptual history Journal Article In: Journal of Neuroscience, vol. 43, no. 20, pp. 3658–3665, 2023. @article{Luo2023b, From moment to moment, the visual properties of objects in the world fluctuate because of external factors like ambient lighting, occlusion and eye movements, and internal (proximal) noise. Despite this variability in the incoming information, our perception is stable. Serial dependence, the behavioral attraction of current perceptual responses toward previously seen stimuli, may reveal a mechanism underlying stability: a spatiotemporally tuned operator that smooths over spurious fluctuations. The current study examined the neural underpinnings of serial dependence by recording the electroencephalographic (EEG) brain response of female and male human observers to prototypical objects (faces, cars, and houses) and morphs that mixed properties of two prototypes. Behavior was biased toward previously seen objects. Representational similarity analysis (RSA) revealed that responses evoked by visual objects contained information about the previous stimulus. The trace of previous representations in the response to the current object occurred immediately on object appearance, suggesting that serial dependence arises from a brain state or set that precedes processing of new input. However, the brain response to current visual objects was not representationally similar to the trace they leave on subsequent object representations. These results reveal that while past stimulus history influences current representations, this influence does not imply a shared neural code between the previous trial (memory) and the current trial (perception). |
Changlin Luo; Mengyan Zhu; Xiangling Zhuang; Guojie Ma Food word processing in Chinese reading: A study of restrained eaters Journal Article In: British Journal of Psychology, vol. 114, no. 2, pp. 476–494, 2023. @article{Luo2023a, Food-related attentional bias refers to the finding that individuals typically prioritize rewarding food-related cues (e.g. food words and food images) compared with non-food stimuli; however, the findings are inconsistent for restrained eaters. Traditional paradigms used to test food-related attentional bias, such as visual probe tasks and visual search tasks, may not reflect individuals' food-word processing at different cognitive stages directly and accurately enough. In this study, we introduced the boundary paradigm to investigate food-word attentional bias for both restrained and unrestrained eaters. Eye movements were recorded while they performed a naturalistic sentence-reading task. The results of later-stage analyses showed that food words were fixated on for less time than non-food words, which indicated a superiority of foveal food-word processing for both restrained and unrestrained eaters. The results of early-stage analyses showed that restrained eaters spent more time on pre-target regions in the food-word valid preview conditions, which indicated a parafoveal food-word processing superiority for restrained eaters (i.e. the parafoveal-on-foveal effect). The superiority of foveal food-word processing provides new insights into explaining food-related attentional bias in general groups. Additionally, the enhanced food-word attentional bias in parafoveal processing for restrained eaters illustrates the importance of individual characteristics in studying word recognition. |
Changlin Luo; Siyuan Qiao; Xiangling Zhuang; Guojie Ma Dynamic attentional bias for pictorial and textual food cues in the visual search paradigm Journal Article In: Appetite, vol. 180, pp. 1–11, 2023. @article{Luo2023, Previous studies have found that individuals have an attentional bias for food cues, which may be related to the energy level or the type of stimulus (e.g., pictorial or textual food cues) of the food cues. However, the available evidence is inconsistent, and there is no consensus about how the type of stimulus and food energy modulate food-related attentional bias. Searching for food is one of the most important daily behaviors. In this study, a modified visual search paradigm was used to explore the attentional bias for food cues, and eye movements were recorded. Food cues consisted of both food words and food pictures with different energy levels (i.e., high- and low-calorie foods). The results showed that there was an attentional avoidance in the early stage but a later-stage attentional approach for all food cues in the pictorial condition. This was especially true for high-calorie food pictures. Participants showed a later-stage conflicting attentional bias for foods with different energy levels in the textual condition. They showed an attentional approach to high-calorie food words but an attentional avoidance of low-calorie food words. These data show that food-related attentional bias varied along with different time courses, which was also modulated by the type of stimulus and food energy. These findings regarding dynamic attentional bias could be explained using the Goal Conflict Model of eating behavior. |
Steven G. Luke; Rachel Yu Liu; Kyle Nelson; Jared Denton; Michael W. Child An ex-Gaussian analysis of eye movements in L2 reading Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 2, pp. 330–344, 2023. @article{Luke2023a, Second language learners' reading is less efficient and more effortful than native reading. However, the source of their difficulty is unclear; L2 readers might struggle with reading in a different orthography, or they might have difficulty with later stages of linguistic interpretation of the input, or both. The present study explored the source of L2 reading difficulty by analyzing the distribution of fixation durations in reading. In three studies, we observed that L2 readers experience an increase in Mu, which we interpret as indicating early orthographic processing difficulty, when the L2 has a significantly different writing system than the L1 (e.g., Chinese and English) but not when the writing systems were similar (e.g., Portuguese and English). L2 readers also experienced an increase in Tau, indicating later-arising processing difficulty which likely reflects later-stage linguistic processes, when they read for comprehension. L2 readers of Chinese also experienced an additional increase in Tau. |
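As background for readers unfamiliar with the Mu and Tau parameters discussed in the Luke et al. abstract above: fixation durations are commonly modeled as an ex-Gaussian distribution, the sum of a Gaussian component (mean Mu, SD Sigma) and an exponential tail (mean Tau). The sketch below is not from the paper itself; it is a minimal illustration of the parameterization using `scipy.stats.exponnorm`, which encodes the ex-Gaussian as `exponnorm(K, loc, scale)` with mu = loc, sigma = scale, and tau = K × scale. The specific parameter values and the use of scipy are assumptions for illustration; the study's actual fitting procedure may differ.

```python
import numpy as np
from scipy.stats import exponnorm

# Simulate fixation durations (ms) from an ex-Gaussian:
# Gaussian component (mu=250, sigma=40) plus an exponential tail (tau=100).
mu, sigma, tau = 250.0, 40.0, 100.0
durations = exponnorm.rvs(K=tau / sigma, loc=mu, scale=sigma,
                          size=5000, random_state=1234)

# Recover the parameters by maximum-likelihood fitting.
# scipy returns (K, loc, scale); convert back to (mu, sigma, tau).
K_hat, loc_hat, scale_hat = exponnorm.fit(durations)
mu_hat, sigma_hat, tau_hat = loc_hat, scale_hat, K_hat * scale_hat
print(f"mu={mu_hat:.0f} ms, sigma={sigma_hat:.0f} ms, tau={tau_hat:.0f} ms")
```

In this framework, a shift in Mu moves the whole distribution (interpreted in the abstract as early orthographic processing difficulty), whereas an increase in Tau lengthens only the slow tail of fixations (later-arising linguistic processing difficulty).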
Steven G. Luke; Tanner Jensen The effect of sudden-onset distractors on reading efficiency and comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 5, pp. 1195–1206, 2023. @article{Luke2023, Reading is an essential skill that requires focused attention. However, much reading is done in non-optimal environments. These days, reading is often done on digital devices or with a digital device nearby. These devices often introduce momentary distractions during reading, interrupting with alerts, notifications, and pop-ups. In two eye-tracking experiments, we investigated how such momentary distractions affect reading. Participants read paragraphs while their eye movements were monitored. During half of the paragraphs, distractions appeared periodically on the screen that required a response from the participants. In Experiment 1, the distractions were arrows that the participant had to respond to and then could immediately forget. In Experiment 2, the participants performed a 1-back task that required them to remember the identity of the last distractor. Compared with the no-distraction condition, the respond-and-forget distractors of Experiment 1 had minimal impact on reading behaviour and comprehension, but the working-memory-load distractors of Experiment 2 led to increased rereading and decreased reading comprehension. It seems a simple pop-up does not disrupt reading, but a message you must remember will. |
Jiří Lukavský; Hauke S. Meyerhoff Gaze coherence reveals distinct tracking strategies in multiple object and multiple identity tracking Journal Article In: Psychonomic Bulletin & Review, pp. 1–10, 2023. @article{Lukavsky2023, In dynamic environments, a central task of the attentional system is to keep track of objects changing their spatial location over time. In some instances, it is sufficient to track only the spatial locations of moving objects (i.e., multiple object tracking; MOT). In other instances, however, it is also important to maintain distinct identities of moving objects (i.e., multiple identity tracking; MIT). Despite previous research, it is not clear whether MOT and MIT performance emerge from the same tracking mechanism. In the present report, we study gaze coherence (i.e., the extent to which participants repeat their gaze behaviour when tracking the same object locations twice) across repeated MOT and MIT trials. We observed more substantial gaze coherence in repeated MOT trials compared to the repeated MIT trials or mixed MOT-MIT trial pairs. A subsequent simulation study suggests that MOT is based more on a grouping mechanism than MIT, whereas MIT is based more on a target-jumping mechanism than MOT. It thus appears unlikely that MOT and MIT emerge from the same basic tracking mechanism. |
Heather D. Lucas; Ana M. Daugherty; Edward Mcauley; Arthur F. Kramer; Neal J. Cohen Dynamic interactions between memory and viewing behaviors: Insights from dyadic modeling of eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 6, pp. 786–801, 2023. @article{Lucas2023, Humans use eye movements to build visual memories. We investigated how the contributions of specific viewing behaviors to memory formation evolve over individual study epochs. We used dyadic modeling to explain performance on a spatial reconstruction task based on interactions among two gaze measures: (a) the entropy of the scanpath and (b) the frequency of item-to-item gaze transitions. To measure these interactions, our hypothesized model included causal pathways by which early-trial viewing behaviors impacted subsequent memory via downstream effects on later viewing. We found that lower scanpath entropy throughout the trial predicted better memory performance. By contrast, the effect of item-to-item transition frequency changed from negative to positive as the trial progressed. The model also revealed multiple pathways by which early-trial viewing dynamically altered late-trial viewing, thereby impacting memory indirectly. Finally, individual differences in scores on an independent measure of memory ability were found to predict viewing effectiveness, and viewing behaviors partially mediated the relation between memory ability and reconstruction accuracy. In a second experiment, the model showed a good fit for an independent dataset. These results highlight the dynamic nature of memory formation and suggest that the order in which eye movements occur can critically determine their effectiveness. |
Cristina Lozano-Argüelles; Nuria Sagarra; Joseph V. Casillas Interpreting experience and working memory effects on L1 and L2 morphological prediction Journal Article In: Frontiers in Language Sciences, vol. 1, pp. 1–16, 2023. @article{LozanoArgueelles2023, The human brain tries to process information as efficiently as possible through mechanisms like prediction. Native speakers predict linguistic information extensively, but L2 learners show variability. Interpreters use prediction while working and research shows that interpreting experience mediates L2 prediction. However, it is unclear whether advantages related to interpreting are due to higher working memory (WM) capacity, a typical characteristic of professional interpreters. To better understand the role of WM during L1 and L2 prediction, English L2 learners of Spanish with and without interpreting experience and Spanish monolinguals completed a visual-world paradigm eye-tracking task and a number-letter sequencing working memory task. The eye-tracking task measured prediction of verbal morphology (present, past) based on suprasegmental information (lexical stress: paroxytone, oxytone) and segmental information (syllabic structure: CV, CVC). Results revealed that WM mediates L1 prediction, such that higher WM facilitates prediction of morphology in monolinguals. However, higher WM hinders prediction in L2 processing for non-interpreters. Interestingly, interpreters behaved similarly to monolinguals, with higher WM facilitating L2 prediction. This study provides further understanding of the variability in L2 prediction. |
Matthew W. Lowder; Adrian Zhou; Peter C. Gordon The lab discovered: Place-for-institution metonyms appearing in subject position are processed as agents Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–16, 2023. @article{Lowder2023, “Hospital” can refer to a physical place or more figuratively to the people associated with it. Such place-for-institution metonyms are common in everyday language, but there remain several open questions in the literature regarding how they are processed. The goal of the current eyetracking experiments was to investigate how metonyms are interpreted when they appear as sentence subjects in structures that are temporarily syntactically ambiguous versus unambiguous (e.g., “The hospital [that was] requested by the doctor…”). If comprehenders have a bias to interpret metonyms in subject position as agents (Fishbein & Harris, 2014), they should initially access the figurative (institutional) sense of the metonym. This interpretation is rendered incorrect at the disambiguating by-phrase, which should lead to reanalysis (i.e., garden-path effects). In Experiment 1, larger garden-path effects were observed for metonyms compared to inanimate control nouns that did not have a figurative sense. In Experiment 2, garden-path effects were equivalent for metonyms and animate sentence subjects. In addition, there was some evidence that readers exhibited initial difficulty at the verb (e.g., “requested”) when it immediately followed the metonym compared to the inanimate control nouns in Experiment 1. Overall, the results suggest that the subject-as-agent heuristic is a powerful cue during sentence processing, which can prompt the comprehender to access a figurative interpretation of a metonym. |
Matthew W. Lowder; Antonio Cardoso; Michael Pittman; Adrian Zhou Effects of syntactic structure on the processing of lexical repetition during sentence reading Journal Article In: Memory & Cognition, vol. 51, no. 5, pp. 1249–1263, 2023. @article{Lowder2023a, Previous research has demonstrated that the ease or difficulty of processing complex semantic expressions depends on sentence structure: Processing difficulty emerges when the constituents that create the complex meaning appear in the same clause, whereas difficulty is reduced when the constituents appear in separate clauses. The goal of the current eye-tracking-while-reading experiments was to determine how changes to sentence structure affect the processing of lexical repetition, as this manipulation enabled us to isolate processes involved in word recognition (repetition priming) from those involved in sentence interpretation (felicity of the repetition). When repetition of the target word was felicitous (Experiment 1), we observed robust effects of repetition priming with some evidence that these effects were weaker when repetition occurred within a clause versus across a clause boundary. In contrast, when repetition of the target word was infelicitous (Experiment 2), readers experienced an immediate repetition cost when repetition occurred within a clause, but this cost was eliminated entirely when repetition occurred across clause boundaries. The results have implications for word recognition during reading, processes of semantic integration, and the role of sentence structure in guiding these linguistic representations. |
Stephanie N. Lovich; Cynthia D. King; David L. K. Murphy; Hossein Abbasi; Patrick Bruns; Christopher A. Shera; Jennifer M. Groh Conserved features of eye movement related eardrum oscillations (EMREOs) across humans and monkeys Journal Article In: Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 378, no. 1886, pp. 1–10, 2023. @article{Lovich2023, Auditory and visual information involve different coordinate systems, with auditory spatial cues anchored to the head and visual spatial cues anchored to the eyes. Information about eye movements is therefore critical for reconciling visual and auditory spatial signals. The recent discovery of eye movement-related eardrum oscillations (EMREOs) suggests that this process could begin as early as the auditory periphery. How this reconciliation might happen remains poorly understood. Because humans and monkeys both have mobile eyes and therefore both must perform this shift of reference frames, comparison of the EMREO across species can provide insights to shared and therefore important parameters of the signal. Here we show that rhesus monkeys, like humans, have a consistent, significant EMREO signal that carries parametric information about eye displacement as well as onset times of eye movements. The dependence of the EMREO on the horizontal displacement of the eye is its most consistent feature, and is shared across behavioural tasks, subjects and species. Differences chiefly involve the waveform frequency (higher in monkeys than in humans) and patterns of individual variation (more prominent in monkeys than in humans), and the waveform of the EMREO when factors due to horizontal and vertical eye displacements were controlled for. This article is part of the theme issue 'Decision and control processes in multisensory perception'. |
Sara LoTemplio; Jack Silcox; Ryan Murdock; David L. Strayer; Brennan R. Payne In: Psychophysiology, vol. 60, no. 12, pp. 1–21, 2023. @article{LoTemplio2023, Both anxiety and working memory capacity appear to predict increased (more negative) error-related negativity (ERN) amplitudes, despite being inversely related to one another. Until the interactive effects of these variables on the ERN are clarified, there may be challenges posed to our ability to use the ERN as an endophenotype for anxiety, as some have suggested. The compensatory error monitoring hypothesis suggests that high trait-anxiety individuals have larger ERN amplitudes because they must employ extra, compensatory efforts to override the working memory demands of their anxiety. Yet, to our knowledge, no ERN study has employed direct manipulation of working memory demands in conjunction with direct manipulations of induced (state) anxiety. Furthermore, little is known about how these manipulations affect other measures of error processing, such as the error-related pupil dilation response and post-error behavioral adjustments. Therefore, we manipulate working memory load and anxiety in a 2 × 2 within-subjects design to examine the interactive effects of working memory load and anxiety on ERN amplitude, error-related pupil dilation response amplitude, and post-error behavior. There were no effects of our manipulations on ERN amplitude, suggesting a strong interpretation of compensatory error-processing theory. However, our worry manipulation affected post-error behavior, such that worry caused a reduction in post-error accuracy. Additionally, our working memory manipulation affected error-related PDR magnitude and the amplitude of the error-related positivity (Pe), such that increased working memory load decreased the amplitude of these responses. Implications of these results within the context of the compensatory error processing framework are discussed. |
Priscila López-Beltrán Heritage speakers' processing of the Spanish subjunctive: A pupillometric study Journal Article In: Linguistic Approaches to Bilingualism, pp. 1–47, 2023. @article{LopezBeltran2023, We investigated linguistic knowledge of subjunctive mood in heritage speakers of Spanish who live in a long-standing English-Spanish bilingual community in Albuquerque, New Mexico. Three experiments examine the constraints on subjunctive selection. Experiment 1 and Experiment 2 employed pupillometry to investigate heritage speakers' online sensitivity to the presence of the subjunctive with non-variable governors (Lexical conditioning) and with negated governors (Structural conditioning). Experiment 3 employed an elicited production task to examine production of subjunctive in the same contexts. The findings of the heritage group were compared to those of a group of Spanish-dominant Mexican bilinguals. Results showed that in comprehension and production, heritage speakers were as sensitive as the Spanish-dominant bilinguals to the lexical and structural factors that condition mood selection. In comprehension, the two groups experienced an increased pupillary dilation in conditions where the indicative was used but the subjunctive was expected. In addition, high-frequency governors and irregular subordinate verbs boosted participants' sensitivity to the presence of the subjunctive. In production, there were no significant differences between heritage speakers and Spanish-dominant bilinguals when producing the subjunctive with non-variable and negated governors. |
Beatriz López; Nicola Jean Gregory; Megan Freeth Social attention patterns of autistic and non-autistic adults when viewing real versus reel people Journal Article In: Autism, vol. 27, no. 8, pp. 2372–2383, 2023. @article{Lopez2023, Research consistently shows that autistic adults do not attend to faces as much as non-autistic adults. However, this conclusion is largely based on studies using pre-recorded videos or photographs as stimuli. In studies using real social scenarios, the evidence is not as clear. To explore the extent to which differences in findings relate to differences in the methodologies used across studies, we directly compared social attention of 32 autistic and 33 non-autistic adults when watching exactly the same video. However, half of the participants in each group were told simply to watch the video (Video condition), and the other half were led to believe they were watching a live webcam feed (‘Live' condition). The results yielded no significant group differences in the ‘Live' condition. However, significant group differences were found in the ‘Video' condition. In this condition, non-autistic participants, but not autistic participants, showed a marked social bias towards faces. The findings highlight the importance of studying social attention combining different methods. Specifically, we argue that studies using pre-recorded footage and studies using real people tap into separate components contributing to social attention. One that is an innate, automatic component and one that is modulated by social norms. Lay Abstract: Early research shows that autistic adults do not attend to faces as much as non-autistic adults. However, some recent studies where autistic people are placed in scenarios with real people reveal that they attend to faces as much as non-autistic people. This study compares attention to faces in two situations. In one, autistic and non-autistic adults watched a pre-recorded video. 
In the other, they watched what they thought were two people in a room in the same building, via a live webcam, when in fact it was exactly the same video in both situations. We report the results of 32 autistic adults and 33 non-autistic adults. The results showed that autistic adults do not differ in any way from non-autistic adults when they watched what they believed was people interacting in real time. However, when they thought they were watching a video, non-autistic participants showed higher levels of attention to faces than autistic participants. We conclude that attention to social stimuli is the result of a combination of two processes. One innate, which seems to be different in autism, and one that is influenced by social norms, which works in the same way in autistic adults without learning disabilities. The results suggest that social attention is not as different in autism as first thought. Specifically, the study contributes to dispel long-standing deficit models regarding social attention in autism as it points to subtle differences in the use of social norms rather than impairments. |
Zoe Loh; Elizabeth H. Hall; Deborah Cronin; John M. Henderson Working memory control predicts fixation duration in scene-viewing Journal Article In: Psychological Research, vol. 87, no. 4, pp. 1143–1154, 2023. @article{Loh2023, When viewing scenes, observers differ in how long they linger at each fixation location and how far they move their eyes between fixations. What factors drive these differences in eye-movement behaviors? Previous work suggests individual differences in working memory capacity may influence fixation durations and saccade amplitudes. In the present study, participants (N = 98) performed two scene-viewing tasks, aesthetic judgment and memorization, while viewing 100 photographs of real-world scenes. Working memory capacity, working memory processing ability, and fluid intelligence were assessed with an operation span task, a memory updating task, and Raven's Advanced Progressive Matrices, respectively. Across participants, we found significant effects of task on both fixation durations and saccade amplitudes. At the level of each individual participant, we also found a significant relationship between memory updating task performance and participants' fixation duration distributions. However, we found no effect of fluid intelligence and no effect of working memory capacity on fixation duration or saccade amplitude distributions, inconsistent with previous findings. These results suggest that the ability to flexibly maintain and update working memory is strongly related to fixation duration behavior. |
Yaohui Liu; Peida Zhan; Yanbin Fu; Qipeng Chen; Kaiwen Man; Yikun Luo In: Intelligence, vol. 100, pp. 1–14, 2023. @article{Liu2023h, Previous studies have found that participants use two cognitive strategies—constructive matching and response elimination—in responding to items in the Raven's Advanced Progressive Matrices (APM). This study proposed a multi-strategy psychometric model that builds on item responses and also incorporates eye-tracking measures, including but not limited to the proportional time on matrix area (PTM), the rate of toggling (ROT), and the rate of latency to first toggle (RLT). By jointly analyzing item responses and eye-tracking measures, this model can measure each participant's intelligence and identify the cognitive strategy used by each participant for each item in the APM. Several main findings were revealed from an eye-tracking-based APM study using the proposed model: (1) The effects of PTM and RLT on the constructive matching strategy selection probability were positive and higher for the former than the latter, while the effect of ROT was negligible. (2) The average intelligence of participants who used the constructive matching strategy was higher than that of participants who used the response elimination strategy, and participants with higher intelligence were more likely to use the constructive matching strategy. (3) High-intelligence participants increased their use of the constructive matching strategy as item difficulty increased, whereas low-intelligence participants decreased their use as item difficulty increased. (4) Participants took significantly less time using the constructive matching strategy than the response elimination strategy. Overall, the proposed model follows the theory-driven modeling logic and provides a new way of studying cognitive strategy in the APM by presenting quantitative results. |
Xin He Liu; Lu Gan; Zhi Ting Zhang; Pan Ke Yu; Ji Dai Probing the processing of facial expressions in monkeys via time perception and eye tracking Journal Article In: Zoological Research, vol. 44, no. 5, pp. 882–893, 2023. @article{Liu2023g, Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and sophisticated behavioral tasks, namely the temporal discrimination task (TDT) and face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions of monkeys, while showing longer reaction times to negative facial expressions of humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, while human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys only showed bias toward emotional expressions upon observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than those of humans, thus shedding new light on inter-species communication through facial expressions between NHPs and humans. |
Xiaoyi Liu; David Melcher The effect of familiarity on behavioral oscillations in face perception Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–15, 2023. @article{Liu2023f, Studies on behavioral oscillations demonstrate that visual sensitivity fluctuates over time and visual processing varies periodically, mirroring neural oscillations at the same frequencies. Do these behavioral oscillations reflect fixed and relatively automatic sensory sampling, or top-down processes such as attention or predictive coding? To disentangle these theories, the current study used a dual-target rapid serial visual presentation paradigm, where participants indicated the gender of a face target embedded in streams of distractors presented at 30 Hz. On critical trials, two identical targets were presented with varied stimulus onset asynchrony from 200 to 833 ms. The target was either familiar or unfamiliar faces, divided into different blocks. We found a 4.6 Hz phase-coherent fluctuation in gender discrimination performance across both trial types, consistent with previous reports. In addition, however, we found an effect at the alpha frequency, with behavioral oscillations in the familiar blocks characterized by a faster high-alpha peak than for the unfamiliar face blocks. These results are consistent with the combination of both a relatively stable modulation in the theta band and faster modulation of the alpha oscillations. Therefore, the overall pattern of perceptual sampling in visual perception may depend, at least in part, on task demands. |
Tianyuan Liu; Bao Li; Chi Zhang; Panpan Chen; Weichen Zhao; Bin Yan Real-time classification of motor imagery using Dynamic Window-Level Granger Causality analysis of fMRI data Journal Article In: Brain Sciences, vol. 13, no. 10, pp. 1–15, 2023. @article{Liu2023e, This article presents a method for extracting neural signal features to identify the imagination of left- and right-hand grasping movements. A functional magnetic resonance imaging (fMRI) experiment was employed to identify four brain regions with significant activations during motor imagery (MI), and the effective connections between these regions of interest (ROIs) were calculated using Dynamic Window-level Granger Causality (DWGC). Then, a real-time fMRI (rt-fMRI) classification system for left- and right-hand MI was developed using the Open-NFT platform. We conducted data acquisition and processing on three subjects, all of whom were recruited from a local college. The maximum accuracy of a Support Vector Machine (SVM) classifier on real-time three-class classification (rest, left hand, and right hand) with effective connections was 69.3%, which is 3% higher, on average, than that of traditional multivoxel pattern classification analysis. Moreover, it significantly improves classification accuracy during the initial stage of MI tasks while reducing latency effects in real-time decoding. The study suggests that the effective connections obtained through the DWGC method serve as valuable features for real-time decoding of MI using fMRI and exhibit higher sensitivity to changes in brain states. This research offers theoretical support and technical guidance for extracting neural signal features in the context of fMRI-based studies. |
Qing Liu; Xueyao Yang; Zekai Chen; Wenjuan Zhang Using synchronized eye movements to assess attentional engagement Journal Article In: Psychological Research, vol. 87, no. 7, pp. 2039–2047, 2023. @article{Liu2023d, The gradual emergence of online education in China in recent years requires new means of real-time monitoring and timely feedback to students. This research examined the effectiveness of assessing attentional engagement from synchronized eye movements through two experiments. The first experiment recruited 24 university students, who watched the same video in high and low attentional engagement states (manipulated via a serial subtraction task), allowing the Inter-Subject Correlation (ISC) of participants' eye movements to be compared across conditions. The results showed that the ISC of eye movements was significantly higher for participants in a high attentional engagement state than for those in a low attentional engagement state. The second experiment recruited 26 university students, who watched video materials accompanied by eye movement modeling examples. The results showed that the ISC of eye movements was significantly lower for participants in the group with eye movement modeling examples than for those without; however, overall test scores were significantly higher in the former than the latter. The first experiment showed that the eye movement trajectories of participants with high attentional engagement were more consistent than those of participants with low attentional engagement. Therefore, the ISC of participants' eye movements could be used as an objective indicator to assess and predict students' attentional state during online education. The second experiment showed that the eye movement modeling examples interfered with the participants' attention distribution to some extent; nevertheless, they positively affected teaching effectiveness. Overall, the studies showed that Inter-Subject Correlation is a reliable means of assessing attentional engagement in domestic online education. |
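The ISC measure described in the entry above is, at its core, a mean pairwise correlation of time-aligned gaze traces. As a rough illustration only (not code from the paper — the function name, the toy data, and the use of a single gaze axis are all assumptions), it could be computed like this:

```python
import numpy as np

def pairwise_isc(gaze):
    """Mean pairwise Pearson correlation across subjects.

    gaze: array of shape (n_subjects, n_samples) holding one
    gaze-position trace per subject, aligned to the same video timeline.
    """
    r = np.corrcoef(np.asarray(gaze, dtype=float))
    iu = np.triu_indices_from(r, k=1)  # unique subject pairs only
    return r[iu].mean()

# Toy check: traces that track a shared signal (engaged viewing)
# should synchronize more than unrelated traces (disengaged viewing).
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 10, 500))
engaged = np.stack([shared + 0.1 * rng.standard_normal(500) for _ in range(5)])
disengaged = rng.standard_normal((5, 500))
high_isc, low_isc = pairwise_isc(engaged), pairwise_isc(disengaged)
```

In practice, studies of this kind typically correlate horizontal and vertical gaze (and sometimes pupil size) separately and average the results, but the pairwise-correlation core is the same.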
Na Liu; Di Wu; Yifan Wang; Pan Zhang; Yinling Zhang Transcranial random noise stimulation boosts early motion perception learning rather than the later performance plateau Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 6, pp. 1021–1031, 2023. @article{Liu2023c, The effect of transcranial random noise stimulation (tRNS) on visual perceptual learning has only been investigated during early training sessions, and the influence of tRNS on later performance is unclear. We engaged participants first in 8 days of training to reach a plateau (Stage 1) and then in continued training for 3 days (Stage 2). In the first group, tRNS was applied to visual areas of the brain while participants were trained on a coherent motion direction identification task over a period of 11 days (Stage 1 + Stage 2). In the second group, participants completed an 8-day training period without any stimulation to reach a plateau (Stage 1); after that, they continued training for 3 days, during which tRNS was administered (Stage 2). In the third group, participants completed the same training as the second group, but during Stage 2, tRNS was replaced by sham stimulation. Coherence thresholds were measured three times: before training, after Stage 1, and after Stage 2. Compared with sham stimulation, tRNS did not improve coherence thresholds during the plateau period. The comparison of learning curves between the first and third groups showed that tRNS decreased thresholds in the early training stage, but it failed to improve plateau thresholds. For the second and third groups, tRNS did not further enhance plateau thresholds after the continued 3-day training period. In conclusion, tRNS facilitated visual perceptual learning in the early stage, but its effect disappeared as the training continued. |
Dongyu Liu; Haibo Yang The improvement of attentional bias in individuals with problematic smartphone use through cognitive reappraisal: An eye-tracking study Journal Article In: Current Psychology, pp. 1–11, 2023. @article{Liu2023b, Attentional bias toward smartphone-related stimuli can intensify Problematic Smartphone Use (PSU) behaviors. The main objective of this study was to investigate the impact of Cognitive Reappraisal (CR) on the attentional bias of individuals with PSU. Twenty-five individuals with PSU (PSUG) and 25 Control Group (CG) participants were recruited and screened using the Smartphone Addiction Scale. The dot-probe paradigm was used in conjunction with eye-tracking technology to examine the CR effect on attentional bias toward smartphone icon stimuli. Under non-reappraisal conditions, the first fixation duration on smartphone icon stimuli was significantly longer than that on neutral stimuli in the PSUG but not the CG. No other expected eye-tracking measures were significant. Additionally, the craving for smartphone icon stimuli of the PSUG was significantly higher than that of the CG under the non-reappraisal condition but not under the reappraisal condition. The findings indicated that CR improves the first fixation duration of attentional bias toward smartphone icon stimuli in the PSUG. This effect may be attributed to CR's ability to reduce cravings for smartphone stimuli and enhance the inhibition capacity. The results of this study could guide interventions for treating PSU and provide theoretical support for such treatment. |
Chi-Hung Liu; Chun-Wei Chang; June Hung; John J. H. Lin; Pi-Shan Sung; Li-Ang Lee; Cheng-Ting Hsiao; Yi-Ping Chao; Elaine Shinwei Huang; Shu-Ling Wang Brain computed tomography reading of stroke patients by resident doctors from different medical specialities: An eye-tracking study Journal Article In: Journal of Clinical Neuroscience, vol. 117, no. 88, pp. 173–180, 2023. @article{Liu2023a, Background: Using the eye-tracking technique, our work aimed to examine whether differences in clinical background affect the training outcome of resident doctors' interpretation skills and reading behaviour related to brain computed tomography (CT). Methods: Twelve resident doctors in the neurology, radiology, and emergency departments were recruited. Each participant had to read CT images of the brain for two cases. We evaluated each participant's accuracy of lesion identification. We also used the eye-tracking technique to assess reading behaviour. We recorded dwell times, fixation counts, run counts, and first-run dwell times of target lesions to evaluate visual attention. Transition entropy was applied to assess the temporal relations and spatial dynamics of systematic image reading. Results: The eye-tracking results showed that the image reading sequence examined by transition entropy was comparable among resident doctors from different medical specialties (p = 0.82). However, the dwell time of the target lesions was shorter for the resident doctors from the neurology department (4828.63 ms |
Chia-Yu Liu; Chao-Jung Wu Effects of working memory and relevant knowledge on reading texts and infographics Journal Article In: Reading and Writing, vol. 36, no. 162, pp. 2319–2343, 2023. @article{Liu2023i, Infographics are a new type of reading material comprising textual and visual information that has been used worldwide. Nonetheless, there has been limited research investigating people's infographic-reading performance and the characteristics of superior readers. This study adopted Chinese texts and infographics as materials and employed eye-tracking technology to assess how working memory and relevant knowledge affected 137 college students' reading comprehension, as indicated by reading accuracy (ACC), and reading efficiency, which in turn was indicated by reading time (RT) and total fixation duration (TFD). For texts, verbal working memory (VWM) exhibited no effects on individuals' reading performance; visuospatial working memory (VSWM) exerted positive effects on both ACC and TFD, and participants with higher knowledge demonstrated better ACC. For infographics, higher-VWM participants showed greater ACC, and higher-VSWM participants displayed a longer RT and TFD, though the effect of knowledge was limited. Moreover, a significant interaction effect of VWM and relevant knowledge on the TFD of infographics was observed, indicating that individuals' prior knowledge or experience might structure schemas in an infographic and then act with VWM to accelerate reading speed. This study improves our understanding of how working memory and relevant knowledge impact the processing of materials with different synthesized levels, and its implications for instruction and research are discussed. |
Baiwei Liu; Anna C. Nobre; Freek van Ede Microsaccades transiently lateralise EEG alpha activity Journal Article In: Progress in Neurobiology, vol. 224, pp. 1–9, 2023. @article{Liu2023, The lateralisation of 8–12 Hz alpha activity is a canonical signature of human spatial cognition that is typically studied under strict fixation requirements. Yet, even during attempted fixation, the brain produces small involuntary eye movements known as microsaccades. Here we report how spontaneous microsaccades – made in the absence of incentives to look elsewhere – can themselves drive transient lateralisation of EEG alpha power according to microsaccade direction. This transient lateralisation of posterior alpha power occurs similarly following start and return microsaccades and is, at least for start microsaccades, driven by increased alpha power ipsilateral to microsaccade direction. This reveals new links between spontaneous microsaccades and human electrophysiological brain activity. It highlights how microsaccades are an important factor to consider in studies relating alpha activity – including spontaneous fluctuations in alpha activity – to spatial cognition, such as studies on visual attention, anticipation, and working memory. |
John P. Liska; Declan P. Rowley; Trevor T. K. Nguyen; Jens-Oliver Muthmann; Daniel A. Butts; Jacob L. Yates; Alexander C. Huk Running modulates primate and rodent visual cortex differently Journal Article In: eLife, vol. 12, no. 415, pp. 1–30, 2023. @article{Liska2023, When mice run, activity in their primary visual cortex (V1) is strongly modulated. This observation has altered conception of a brain region assumed to be a passive image processor. Extensive work has followed to dissect the circuits and functions of running-correlated modulation. However, it remains unclear whether visual processing in primates might similarly change during locomotion. We measured V1 activity in marmosets while they viewed stimuli on a treadmill. In contrast to mouse V1, marmoset V1 was slightly but reliably suppressed during running. Population-level analyses revealed trial-to-trial fluctuations of shared gain across V1 in both species, but these gain modulations were smaller and more often negatively correlated with running in marmosets. Thus, population-scale gain fluctuations of V1 reflect a common feature of mammalian visual cortical function, but important quantitative differences yield distinct consequences for the relation between vision and action in primates versus rodents. |
Juan Linde-Domingo; Bernhard Spitzer Geometry of visuospatial working memory information in miniature gaze patterns Journal Article In: Nature Human Behaviour, pp. 1–16, 2023. @article{LindeDomingo2023, Stimulus-dependent eye movements have been recognized as a potential confound in decoding visual working memory information from neural signals. Here we combined eye-tracking with representational geometry analyses to uncover the information in miniature gaze patterns while participants (n = 41) were cued to maintain visual object orientations. Although participants were discouraged from breaking fixation by means of real-time feedback, small gaze shifts (<1°) robustly encoded the to-be-maintained stimulus orientation, with evidence for encoding two sequentially presented orientations at the same time. The orientation encoding on stimulus presentation was object-specific, but it changed to a more object-independent format during cued maintenance, particularly when attention had been temporarily withdrawn from the memorandum. Finally, categorical reporting biases increased after unattended storage, with indications of biased gaze geometries already emerging during the maintenance periods before behavioural reporting. These findings disclose a wealth of information in gaze patterns during visuospatial working memory and indicate systematic changes in representational format when memory contents have been unattended. |
Jaeseob Lim; Sang-Hun Lee Spatial correspondence in relative space regulates serial dependence Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–11, 2023. @article{Lim2023, Our perception is often attracted to what we have seen before, a phenomenon called ‘serial dependence'. Serial dependence can help maintain a stable perception of the world, given the statistical regularity in the environment. If serial dependence serves this presumed utility, it should be pronounced when consecutive elements share the same identity, even when multiple elements spatially shift across successive views. However, such preferential serial dependence between identity-matching elements in dynamic situations has never been empirically tested. Here, we hypothesized that serial dependence between consecutive elements is modulated more effectively by spatial correspondence in relative space than by that in absolute space, because spatial correspondence in relative coordinates can warrant identity matching invariantly to changes in absolute coordinates. To test this hypothesis, we developed a task where two targets change positions in unison between successive views. We found that serial dependence was substantially modulated by the correspondence in relative coordinates, but not by that in absolute coordinates. Moreover, such selective modulation by the correspondence in relative space was observed even for the serial dependence defined by previous non-target elements. Our findings are consistent with the view that serial dependence subserves object-based perceptual stabilization over time in dynamic situations. |
Agnieszka Lijewska The influence of semantic bias on triple non-identical cognates during reading: Evidence from trilinguals' eye movements Journal Article In: Second Language Research, vol. 39, no. 4, pp. 1235–1263, 2023. @article{Lijewska2023, The current study investigated how the processing of triple cognates (words sharing form and meaning across three languages) is modulated by the semantic bias of sentence context in a reading task. In the study, Polish–German–English trilinguals read English sentences while their eye movements were monitored. The sentences were either semantically biased (high-context) or neutral (low-context) towards target words. The targets were either Polish–German–English cognates whose cross-language form overlap was incomplete (e.g. DIAMENT–DIAMANT–DIAMOND) or English-only controls (e.g. KURCZAK–HÄHNCHEN–CHICKEN). The results revealed a significant effect of context in gaze durations and in total reading time. Importantly, no cognate facilitation effect was identified in any reading measure. The gaze duration data additionally revealed that English-only controls were read slower in low-context sentences than in high-context sentences but gaze durations for cognates were not affected by the sentence context. Thus, prior bilingual findings were only partially replicated in the current study with trilinguals. This suggests that bilingual models of language processing should be carefully adapted to trilinguals. The current data may also mean that non-identical cognates (even those shared across three languages) induce relatively small effects and large samples of participants and items may be needed to detect such effects across reading measures. |
Justin D. Lieber; Gerick M. Lee; Najib J. Majaj; J. Anthony Movshon Sensitivity to naturalistic texture relies primarily on high spatial frequencies Journal Article In: Journal of Vision, vol. 23, no. 2, pp. 1–25, 2023. @article{Lieber2023, Natural images contain information at multiple spatial scales. Though we understand how early visual mechanisms split multiscale images into distinct spatial frequency channels, we do not know how the outputs of these channels are processed further by mid-level visual mechanisms. We have recently developed a texture discrimination task that uses synthetic, multi-scale, “naturalistic” textures to isolate these mid-level mechanisms. Here, we use three experimental manipulations (image blur, image rescaling, and eccentric viewing) to show that perceptual sensitivity to naturalistic structure is strongly dependent on features at high object spatial frequencies (measured in cycles/image). As a result, sensitivity depends on a texture acuity limit, a property of the visual system that sets the highest retinal spatial frequency (measured in cycles/degree) at which observers can detect naturalistic features. Analysis of the texture images using a model observer analysis shows that naturalistic image features at high object spatial frequencies carry more task-relevant information than those at low object spatial frequencies. That is, the dependence of sensitivity on high object spatial frequencies is a property of the texture images, rather than a property of the visual system. Accordingly, we find human observers' ability to extract naturalistic information (their efficiency) is similar for all object spatial frequencies. We conclude that the mid-level mechanisms that underlie perceptual sensitivity effectively extract information from all image features below the texture acuity limit, regardless of their retinal and object spatial frequency. |
Ming-Ray Liao; Andy J. Kim; Brian A. Anderson Neural correlates of value-driven spatial orienting Journal Article In: Psychophysiology, vol. 60, no. 9, pp. 1–13, 2023. @article{Liao2023, Reward learning has been shown to habitually guide overt spatial attention to specific regions of a scene. However, the neural mechanisms that support this bias are unknown. In the present study, participants learned to orient themselves to a particular quadrant of a scene (a high-value quadrant) to maximize monetary gains. This learning was scene-specific, with the high-value quadrant varying across different scenes. During a subsequent test phase, participants were faster at identifying a target if it appeared in the high-value quadrant (valid), and initial saccades were more likely to be made to the high-value quadrant. fMRI analyses during the test phase revealed learning-dependent priority signals in the caudate tail, superior colliculus, frontal eye field, anterior cingulate cortex, and insula, paralleling findings concerning feature-based, value-driven attention. In addition, ventral regions typically associated with scene selection and spatial information processing, including the hippocampus, parahippocampal gyrus, and temporo-occipital cortex, were also implicated. Taken together, our findings offer new insights into the neural architecture subserving value-driven attention, both extending our understanding of nodes in the attention network previously implicated in feature-based, value-driven attention and identifying a ventral network of brain regions implicated in reward's influence on scene-dependent spatial orienting. |
Hsin-I Liao; Haruna Fujihira; Shimpei Yamagishi; Yung-Hao Yang; Shigeto Furukawa Seeing an auditory object: Pupillary light response reflects covert attention to auditory space and object Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 2, pp. 276–290, 2023. @article{Liao2023a, Attention to the relevant object and space is the brain's strategy to effectively process the information of interest in complex environments with limited neural resources. Numerous studies have documented how attention is allocated in the visual domain, whereas the nature of attention in the auditory domain has been much less explored. Here, we show that the pupillary light response can serve as a physiological index of auditory attentional shift and can be used to probe the relationship between space-based and object-based attention as well. Experiments demonstrated that the pupillary response corresponds to the luminance condition where the attended auditory object (e.g., spoken sentence) was located, regardless of whether attention was directed by a spatial (left or right) or nonspatial (e.g., the gender of the talker) cue and regardless of whether the sound was presented via headphones or loudspeakers. These effects on the pupillary light response could not be accounted for as a consequence of small (although observable) biases in gaze position drifting. The overall results imply a unified audiovisual representation of spatial attention. Auditory object-based attention contains the space representation of the attended auditory object, even when the object is oriented without explicit spatial guidance. |
Junhao Liang; Severin Maher; Li Zhaoping Eye movement evidence for the V1 Saliency Hypothesis and the Central-peripheral Dichotomy theory in an anomalous visual search task Journal Article In: Vision Research, vol. 212, pp. 1–14, 2023. @article{Liang2023a, Typically, searching for a target among uniformly tilted non-targets is easier when this target is perpendicular, rather than parallel, to the non-targets. The V1 Saliency Hypothesis (V1SH) – that V1 creates a saliency map to guide attention exogenously – predicts exactly the opposite in a special case: each target or non-target is a pair of equally-sized disks, a homo-pair of two disks of the same color, black or white, or a hetero-pair of two disks of the opposite color; the inter-disk displacement defines its orientation. This prediction – parallel advantage – was supported by the finding that parallel targets require shorter reaction times (RTs) to report targets' locations. Furthermore, it is stronger for targets further from the center of search images, as predicted by the Central-peripheral Dichotomy (CPD) theory entailing that saliency effects are stronger in peripheral than in central vision. However, the parallel advantage could arise from a shorter time required to recognize – rather than to shift attention to – the parallel target. By gaze tracking, the present study confirms that the parallel advantage is solely due to the RTs for the gaze to reach the target. Furthermore, when the gaze is sufficiently far from the target during search, saccade to a parallel, rather than perpendicular, target is more likely, demonstrating the Central-peripheral Dichotomy more directly. Parallel advantage is stronger among observers encouraged to let their search be guided by spontaneous gaze shifts, which are presumably guided by bottom-up saliency rather than top-down factors. |
Guangsheng Liang; John E. Poquiz; Miranda Scolari Space- and feature-based attention operate both independently and interactively within latent components of perceptual decision making Journal Article In: Journal of Vision, vol. 23, no. 1, pp. 1–17, 2023. @article{Liang2023, Top-down visual attention filters undesired stimuli while selected information is afforded the lion's share of limited cognitive resources. Multiple selection mechanisms can be deployed simultaneously, but how unique influences of each combine to facilitate behavior remains unclear. Previously, we failed to observe an additive perceptual benefit when both space-based attention (SBA) and feature-based attention (FBA) were cued in a sparse display (Liang & Scolari, 2020): FBA was restricted to higher order decision-making processes when combined with a valid spatial cue, whereas SBA additionally facilitated target enhancement. Here, we introduced a series of design modifications across three experiments to elicit both attention mechanisms within signal enhancement while also investigating the impacts on decision making. First, we found that when highly reliable spatial and feature cues made unique contributions to search (experiment 1), or when each cue component was moderately reliable (experiments 2a and 2b), both mechanisms were deployed independently to resolve the target. However, the same manipulations produced interactive attention effects within other latent decision-making components that depended on the probability of the integrated cueing object. Time spent before evidence accumulation was reduced and responses were more conservative for the most likely pre-cue combination—even when it included an invalid component. These data indicate that selection mechanisms operate on sensory signals invariably in an independent manner, whereas a higher-order dependency occurs outside of signal enhancement. |
Feifei Liang; Qi Gao; Xin Li; Yongsheng Wang; Xuejun Bai; Simon P. Liversedge In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 49, no. 1, pp. 98–115, 2023. @article{Liang2023b, Word spacing is important in guiding eye movements during spaced alphabetic reading. Chinese is unspaced and it remains unclear as to how Chinese readers segment and identify words in reading. We conducted two parallel experiments to investigate whether the positional probabilities of the initial and the final characters of a multicharacter word affected word segmentation and identification in Chinese reading. Two-character words were selected as targets. In Experiment 1, the initial character's positional probability was manipulated as being either high or low, and the final character was kept identical across the two conditions. In Experiment 2, an analogous manipulation was made for the final character of the target word. We recorded adults' and children's eye movements when they read sentences containing these words. In Experiment 1, reading times on targets did not differ in the two conditions for both children and adults, providing no evidence that a word initial character's positional probability contributes to word segmentation. In Experiment 2, adults had shorter reading times and made fewer refixations on targets that comprised final characters with high relative to low positional probabilities; a similar effect was observed in children, but this effect had a slower time course. The results demonstrate that the positional probability of the final (but not the initial) character of a word influences segmentation commitments in reading. It suggests that Chinese readers identify where a currently fixated word ends, and via this commitment, by default, they identify where the subsequent word begins. |
Yutong Li; Hanwen Shi; Shan Li; Lei Gao; Xiaolei Gao The adjustment of complexity on sarcasm processing in Chinese: Evidence from reading time indicators Journal Article In: Brain Sciences, vol. 13, no. 2, pp. 1–13, 2023. @article{Li2023m, It is controversial whether sarcasm processing should go through literal meaning processing. There is also a lack of eye movement evidence for Chinese sarcasm processing. In this study, we used eye movement experiments to explore the processing differences between sarcastic and literal meaning in Chinese text and whether this was regulated by sentence complexity. We manipulated the variables of complexity and literality. We recorded 33 participants' eye movements when they were reading Chinese text and the results were analyzed by a linear mixed model. We found that, in the early stage of processing, there was no difference between the processing time of the sarcastic meaning and the literal meaning of simple remarks, whereas for complex remarks, the time needed to process the sarcastic meaning was longer than that needed to process the literal meaning. In the later stage of processing, regardless of complexity, the processing time of the sarcastic meaning was longer than that of the literal meaning. These results suggest that sarcastic speech processing in Chinese is influenced by literal meaning, and the effect of literal meaning on sarcastic remarks is regulated by complexity. Sarcastic meaning was expressed differently in different stages of processing. These results support the hierarchical salience hypothesis of the serial modular model. |
Yongkai Li; Shuai Zhang; Gancheng Zhu; Zehao Huang; Rong Wang; Xiaoting Duan; Zhiguo Wang A CNN-based wearable system for driver drowsiness detection Journal Article In: Sensors, vol. 23, no. 7, 2023. @article{Li2023l, Drowsiness poses a serious challenge to road safety and various in-cabin sensing technologies have been experimented with to monitor driver alertness. Cameras offer a convenient means for contactless sensing, but they may violate user privacy and require complex algorithms to accommodate user (e.g., sunglasses) and environmental (e.g., lighting conditions) constraints. This paper presents a lightweight convolutional neural network that measures eye closure based on eye images captured by a wearable glass prototype, which features a hot mirror-based design that allows the camera to be installed on the glass temples. The experimental results showed that the wearable glass prototype, with the neural network in its core, was highly effective in detecting eye blinks. The blink rate derived from the glass output was highly consistent with an industrial gold standard EyeLink eye-tracker. As eye blink characteristics are sensitive measures of driver drowsiness, the glass prototype and the lightweight neural network presented in this paper would provide a computationally efficient yet viable solution for real-world applications. |
Yijing Li; Xiangling Zhuang; Guojie Ma Use of minimal working memory in visual comparison: An eye-tracking study Journal Article In: Perception, vol. 52, no. 11-12, pp. 759–773, 2023. @article{Li2023k, In this study, we used a novel application of the previous paradigm provided by Pomplun to examine the eye movement strategies of using minimal working memory in visual comparison. This paradigm includes two tasks: one is a free comparison and the other is a single sequential comparison. In the free comparison, participants can freely view two horizontally presented stimuli until they judge whether the two stimuli are the same or not. In the single sequential comparison, participants can only view the left-side stimuli one time, and when their eyes cross the invisible boundary at the center of the screen, the left-side stimuli disappear and the right-side stimuli appear. Participants need to judge whether the right-side stimuli are the same as the disappeared left-side stimuli. Eye movement data showed significant differences between the single sequential comparison and free comparison tasks, suggesting the use of minimal working memory in free comparison. Moreover, when the number of items was more than three, an average of 2.87 items would be processed in each view sequence. Participants also used the alternating left-right reference strategy that made the shortest scan path with the use of minimal working memory. The typical eye movement strategy in visual comparison and its theoretical significance were discussed. |
Xinjing Li; Qingqing Qu Verbal working memory capacity modulates semantic and phonological prediction in spoken comprehension Journal Article In: Psychonomic Bulletin & Review, pp. 1–10, 2023. @article{Li2023j, Mounting evidence suggests that people may use multiple cues to predict different levels of representation (e.g., semantic, syntactic, and phonological) during language comprehension. One question that has been less investigated is the relationship between general cognitive processing and the efficiency of prediction at various linguistic levels, such as semantic and phonological levels. To address this research gap, the present study investigated how working memory capacity (WMC) modulates different kinds of prediction behavior (i.e., semantic prediction and phonological prediction) in the visual world. Chinese speakers listened to the highly predictable sentences that contained a highly predictable target word, and viewed a visual display of objects. The visual display of objects contained a target object corresponding to the predictable word, a semantic or a phonological competitor that was semantically or phonologically related to the predictable word, and an unrelated object. We conducted a Chinese version of the reading span task to measure verbal WMC and grouped participants into high- and low-span groups. Participants showed semantic and phonological prediction with comparable size in both groups during language comprehension, with earlier semantic prediction in the high-span group, and a similar time course of phonological prediction in both groups. These results suggest that verbal working memory modulates predictive processing in language comprehension. |
Tianyuan Li; Pok-Man Siu State relationship orientation and helping behaviors: The influence of hunger and trait relationship orientations Journal Article In: Current Psychology, vol. 42, no. 31, pp. 27317–27330, 2023. @article{Li2023n, Exchange orientation (EO) and communal orientation (CO) are two fundamental relationship orientations (ROs). We argue that state RO (i.e., the relative activation of the two ROs at a specific moment) varies across situations and should be differentiated from trait ROs. In two studies, we examined how state RO affected subsequent helping behaviors and how it was influenced by a situational factor (i.e., hunger). We also examined whether trait ROs moderated the above links. An eye-tracking paradigm (Study 1) and a scenario-based paradigm (Study 2) were adopted to assess state RO. The two studies consistently found that relatively more activation of state EO over state CO reduced helping tendency toward strangers (Study 1) and acquaintances (Study 2). High trait CO amplified the effect in Study 1. Moreover, hunger heightened the relative activation of state EO over state CO in both studies, but the effect was only significant for participants with high trait EO in Study 1. The results highlight the importance of studying the momentary variation of ROs and open new research directions. |
Siyi Li; Xuemei Zeng; Zhujun Shao; Qing Yu Neural representations in visual and parietal cortex differentiate between imagined, perceived, and illusory experiences Journal Article In: Journal of Neuroscience, vol. 43, no. 38, pp. 6508–6524, 2023. @article{Li2023i, Humans constantly receive massive amounts of information, both perceived from the external environment and imagined from the internal world. To function properly, the brain needs to correctly identify the origin of information being processed. Recent work has suggested common neural substrates for perception and imagery. However, it has remained unclear how the brain differentiates between external and internal experiences with shared neural codes. Here we tested this question in human participants (male and female) by systematically investigating the neural processes underlying the generation and maintenance of visual information from voluntary imagery, veridical perception, and illusion. The inclusion of illusion allowed us to differentiate between objective and subjective internality: while illusion has an objectively internal origin and can be viewed as involuntary imagery, it is also subjectively perceived as having an external origin like perception. Combining fMRI, eye-tracking, multivariate decoding and encoding approaches, we observed superior orientation representations in parietal cortex during imagery compared to perception, and conversely in early visual cortex. This imagery dominance gradually developed along a posterior-to-anterior cortical hierarchy from early visual to parietal cortex, emerged in the early epoch of imagery and sustained into the delay epoch, and persisted across varied imagined contents. 
Moreover, representational strength of illusion was more comparable to imagery in early visual cortex, but more comparable to perception in parietal cortex, suggesting content-specific representations in parietal cortex differentiate between subjectively internal and external experiences, as opposed to early visual cortex. These findings together support a domain-general engagement of parietal cortex in internally-generated experience. |
Shuang Li; Xiuhong Tong; Wei Shen Influence of lexical tone similarity on spoken word recognition in Mandarin Chinese: Evidence from eye tracking Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 66, no. 9, pp. 3453–3472, 2023. @article{Li2023h, Purpose: Using the visual world paradigm with the eye-tracking technique, this study examined the extent to which lexical tone similarity influences spoken word recognition. Method: In two experiments, participants were audibly presented with a target word and visually presented with the same target word, a tonal competitor, and two distractors, and they were required to identify the target word. In Experiment 1, the two tonal competitors shared either acoustically highly similar tones (e.g., target word: /yang2tai2/, “balcony” vs. competitor: /yang3zi3/, “adopted son”) or acoustically lowly similar tones (e.g., target word: /yang2tai2/, “balcony” vs. competitor:/yang4ben3/, “sample”). In Experiment 2, the acoustic similarity of the target words and the tonal competitors shared either acoustically highly similar tones or acoustically lowly similar tones or identical tones (e.g., target word: /yang2tai2/, “balcony” vs. competitor: /yang2mao2/, “wool”). Results: The results of the two experiments consistently demonstrated a graded tonal competitor effect, in which acoustically highly similar tonal competitors attracted more visual attention than acoustically lowly similar tonal competitors. Conclusion: Tonal similarity plays a graded constraining role in spoken word recognition in Mandarin Chinese. |
Qian Li A preliminary study on the online processing of anticipatory tonal coarticulation – Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–8, 2023. @article{Li2023g, While the f0 realization of lexical tones varies extensively in contexts, little is known about how listeners process the variation in lexical tones due to contextual effects such as tonal coarticulation in spoken word recognition. This study thus aims to fill the knowledge gap in tone perception with evidence from two types of anticipatory tonal coarticulation effects in Tianjin Mandarin, i.e., the slope raising effect due to a following low-falling tone and the overall-height raising effect due to a following low-dipping tone. An eye-tracking experiment with the Visual World Paradigm was carried out to compare participants' eye movements when they heard targets in three types of anticipatory raising conditions, i.e., the Slope Raising condition, the Overall-height Raising condition, as well as the No Raising condition (the baseline). The eye movement results showed significant differences in the proportion of looks to target between the Slope Raising condition versus the other two conditions, whereas the Overall-height Raising condition did not differ significantly from the No Raising condition. The findings thus suggest the facilitatory effect of tonal coarticulation cues in the anticipation of the upcoming tones, but listeners in this study seemed to be only sensitive to the raising in the f0 slope rather than the overall raising in the f0 height. |
Nan Li; Dongxia Sun; Suiping Wang Semantic preview effect of relatedness and plausibility in reading Chinese: Evidence from high constraint sentences Journal Article In: Reading and Writing, vol. 36, no. 5, pp. 1319–1338, 2023. @article{Li2023o, In natural reading, the processing of words in fixation is influenced by semantic information obtained through preview (i.e., the semantic preview effect). Previous studies have confirmed that two types of semantic information exhibit the semantic preview effect: semantic association, which is reflected by the semantic relationship between preview words and target words, and semantic integration, which is reflected by the plausibility of preview words in sentences. This study examined whether and how these two types of semantic preview information interact to influence readers' processing of words in the fovea. We referenced high constraint sentences, in which contextual information strongly limits the possible meanings, and the meaning of the target word can be activated before the target word becomes fixated. Thus, the meaning of the target word can be obtained at least as early as when the pretarget word becomes fixated. Therefore, by creating a high constraint context, the reader can obtain the meaning of the upcoming word both through preview and through preactivation within the same preview time window. We tested the preview effect of semantic relatedness when preview words were implausible (Experiment 1) and plausible (Experiment 2). Readers' eye movements were measured. The results showed that the preview effect (shortened fixation duration) of semantic relatedness appeared only when the preview word was plausible. This finding suggests that readers can use semantic information from different sources within the same preview time window and that message-level representations play an immediate and pivotal role in this process. |
Na Li; Junsheng Liu; Yong Xie; Weidong Ji; Zhongting Chen Age-related decline of online visuomotor adaptation: A combined effect of deteriorations of motor anticipation and execution Journal Article In: Frontiers in Aging Neuroscience, vol. 15, pp. 1–17, 2023. @article{Li2023f, The literature has established that the capability of visuomotor adaptation decreases with aging. However, the underlying mechanisms of this decline are yet to be fully understood. The current study addressed this issue by examining how aging affected visuomotor adaptation in a continuous manual tracking task with delayed visual feedback. To distinguish separate contributions of the declined capability of motor anticipation and deterioration of motor execution to this age-related decline, we recorded and analyzed participants' manual tracking performances and their eye movements during tracking. Twenty-nine older people and twenty-three young adults (control group) participated in this experiment. The results showed that the age-related decline of visuomotor adaptation was strongly linked to degraded performance in predictive pursuit eye movement, indicating that the declined capability of motor anticipation with aging had critical influences on the age-related decline of visuomotor adaptation. Additionally, deterioration of motor execution, measured by random error after controlling for the lag between target and cursor, was found to have an independent contribution to the decline of visuomotor adaptation. Taking these findings together, we see a picture that the age-related decline of visuomotor adaptation is a joint effect of the declined capability of motor anticipation and the deterioration of motor execution with aging. |
Hui Li; Xiaolu Wang; Kevin B. Paterson; Hua Zhang; Degao Li Is there a processing advantage for verb-noun collocations in Chinese reading? Evidence from eye movements during reading Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–12, 2023. @article{Li2023e, A growing number of studies show a processing advantage for collocations, which are commonly-used juxtapositions of words, such as “joint effort” or “shake hands,” suggesting that skilled readers are keenly perceptive to the occurrence of two words in phrases. With the current research, we report two experiments that used eye movement measures during sentence reading to explore the processing of four-character verb-noun collocations in Chinese, such as 修改文章 (“revise the article”). Experiment 1 compared the processing of these collocations relative to similar four-character expressions that are not collocations (e.g., 修改结尾, “revise the ending”) in neutral contexts and contexts in which the collocation was predictable from the preceding sentence context. Experiment 2 further examined the processing of these four-character collocations, by comparing eye movements for commonly-used “strong” collocations, such as 保护环境 (“protect the environment”), as compared to less commonly-used “weak” collocations, such as 保护自然 (“protect nature”), again in neutral contexts and contexts in which the collocations were highly predictable. The results reveal a processing advantage for both collocations relative to novel expressions, and for “strong” collocations relative to “weak” collocations, which was independent of effects of contextual predictability. We interpret these findings as providing further evidence that readers are highly sensitive to the frequency that words co-occur as a phrase in written language, and that a processing advantage for collocations occurs independently of contextual expectations. |
Hsin-Hung Li; Clayton E. Curtis Neural population dynamics of human working memory Journal Article In: Current Biology, vol. 33, no. 17, pp. 3775–3784, 2023. @article{Li2023d, The activity of neurons in macaque prefrontal cortex (PFC) persists during working memory (WM) delays, providing a mechanism for memory. Although theory, including formal network models, assumes that WM codes are stable over time, PFC neurons exhibit dynamics inconsistent with these assumptions. Recently, multivariate reanalyses revealed the coexistence of both stable and dynamic WM codes in macaque PFC. Human EEG studies also suggest that WM might contain dynamics. Nonetheless, how WM dynamics vary across the cortical hierarchy and which factors drive dynamics remain unknown. To elucidate WM dynamics in humans, we decoded WM content from fMRI responses across multiple cortical visual field maps. We found coexisting stable and dynamic neural representations of WM during a memory-guided saccade task. Geometric analyses of neural subspaces revealed that early visual cortex exhibited stronger dynamics than high-level visual and frontoparietal cortex. Leveraging models of population receptive fields, we visualized and made the neural dynamics interpretable. We found that during WM delays, V1 population initially encoded a narrowly tuned bump of activation centered on the peripheral memory target. Remarkably, this bump then spread inward toward foveal locations, forming a vector along the trajectory of the forthcoming memory-guided saccade. In other words, the neural code transformed into an abstraction of the stimulus more proximal to memory-guided behavior. Therefore, theories of WM must consider both sensory features and their task-relevant abstractions because changes in the format of memoranda naturally drive neural dynamics. |
Haolun Li; Zhijun Li; Guanyi Lyu; Mi Wang; Bangshan Liu; Yan Zhang; Lingjiang Li; Greg J. Siegle Suicide-relevant information processing in unipolar and bipolar depression: An eye-tracking study Journal Article In: Journal of Psychopathology and Clinical Science, vol. 132, no. 4, pp. 361–371, 2023. @article{Li2023c, Suicide-relevant attentional biases are found in suicide attempters (SAs) with depression. Wenzel and Beck provide a theoretical framework that suggests suicide-related attention biases confer vulnerability to suicide. In this study, we integrated eye-tracking dynamics of suicide-related attentional biases with self-report measures to test their model. A free-viewing eye-tracking paradigm, which simultaneously presented four images with different valences (suicide-related, negative, positive, neutral), was examined in 76 SAs with unipolar or bipolar depression, 66 nonsuicidal depressive participants (ND), and 105 never-depressed healthy control participants (HC). Structural equation modeling (SEM) was used for the theory testing. SA gazed more at suicide-relevant stimuli throughout the 25-s trial compared with ND. SA and ND initially detected suicide-related stimuli faster than HC. Groups did not differ on how often they initially gazed at suicide images or how fast they disengaged away from them. Eye-tracking indices of attentional biases, together with self-reported hopelessness, adequately fit an SEM consistent with Wenzel and Beck's cognitive theory of suicide-related information processing. Potentially, suicide-related attention biases could increase vulnerability to suicidal ideation and eventual suicidal behaviors. |
Deming Li; Ankur A. Butala; Laureano Moro-Velazquez; Trevor Meyer; Esther S. Oh; Chelsey Motley; Jesús Villalba; Najim Dehak Automating analysis of eye movement and feature extraction for different neurodegenerative disorders Journal Article In: Computers in Biology and Medicine, vol. 170, pp. 1–16, 2023. @article{Li2023b, The clinical observation and assessment of extra-ocular movements is common practice in assessing neurodegenerative disorders but remains observer-dependent. In the present study, we propose an algorithm that can automatically identify saccades, fixation, smooth pursuit, and blinks using a non-invasive eye tracker. Subsequently, response-to-stimuli-derived interpretable features were elicited that objectively and quantitatively assess patient behaviors. The cohort analysis encompasses persons with mild cognitive impairment (MCI), Alzheimer's disease (AD), Parkinson's disease (PD), Parkinson's disease mimics (PDM), and controls (CTRL). Overall, results suggested that the AD/MCI and PD groups had significantly different saccade and pursuit characteristics compared to CTRL when the target moved faster or covered a larger visual angle during smooth pursuit. These two groups also displayed more omitted antisaccades and longer average antisaccade latency than CTRL. When reading a text passage silently, people with AD/MCI had more fixations. During visual exploration, people with PD demonstrated a more variable saccade duration than other groups. In the prosaccade task, the PD group showed a significantly smaller average hypometria gain and accuracy, with the most statistical significance and highest AUC scores of features studied. The minimum saccade gain was a PD-specific feature different from CTRL and PDM. These features, as oculographic biomarkers, can be potentially leveraged in distinguishing different types of NDs, yielding more objective and precise protocols to diagnose and monitor disease progression. |
Aoqi Li; Zhenzhong Chen; Jeremy M. Wolfe; Christian N. L. Olivers How do people find pairs? Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 8, pp. 2190–2204, 2023. @article{Li2023a, Humans continuously scan their visual environment for relevant information. Such visual search behavior has typically been studied with tasks in which the search goal is constant and well-defined, requiring relatively little interplay between memory and orienting. Here we studied a situation in which the target is not known in advance, and instead, memory needs to be dynamically updated during the actual search. Observers compared two simultaneously presented arrays of objects for any matching pair of items—a task that requires continuous comparisons between what is seen now and what was seen a few moments ago. To manipulate the balance between memorizing and scanning, we ran two versions of the task. In an eye-tracking version, the objects were continuously available and could be scanned with relative ease. The results suggested that observers preferred scanning over memorizing. In a mouse-tracking version, perceptual availability was limited, and scanning was slowed. Now observers substantially increased their memory use. Thus, the results revealed a flexible and dynamic interplay between memory and perception. The findings aid in further bridging the research fields of attention and memory. |
Alison Y. Li; Thomas C. Sprague Awareness of the relative quality of spatial working memory representations Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 5, pp. 1710–1721, 2023. @article{Li2023, Working memory (WM) is the ability to maintain and manipulate information no longer accessible in the environment. The brain maintains WM representations over delay periods in noisy population-level activation patterns, resulting in variability in WM representations across items and trials. It is established that participants can introspect aspects of the quality of WM representations, and that they can accurately compare which of several WM representations of stimulus features like orientation or color is better on each trial. However, whether this ability to evaluate and compare the quality of multiple WM representations extends to spatial WM tasks remains unknown. Here, we employed a memory-guided saccade task to test recall errors for remembered spatial locations when participants were allowed to choose the most precise representation to report. Participants remembered either one or two spatial locations over a delay and reported one item's location with a saccade. On trials with two spatial locations, participants reported either the spatial location of a randomly cued item, or the location of the stimulus they remembered best. We found a significant improvement in recall error and increase in response time (RT) when participants reported their best-remembered item compared with trials in which they were randomly cued. These results demonstrate that participants can accurately introspect the relative quality of neural WM representations for spatial position, consistent with previous observations for other stimulus features, and support a model of WM coding involving noisy representations across items and trials. |
Aaron J. Levi; Yuan Zhao; Il Memming Park; Alexander C. Huk Sensory and choice responses in MT distinct from motion encoding Journal Article In: Journal of Neuroscience, vol. 43, no. 12, pp. 2090–2103, 2023. @article{Levi2023, The macaque middle temporal (MT) area is well known for its visual motion selectivity and relevance to motion perception, but the possibility of it also reflecting higher-level cognitive functions has largely been ignored. We tested for effects of task performance distinct from sensory encoding by manipulating subjects' temporal evidence-weighting strategy during a direction discrimination task while performing electrophysiological recordings from groups of MT neurons in rhesus macaques (one male, one female). This revealed multiple components of MT responses that were, surprisingly, not interpretable as behaviorally relevant modulations of motion encoding, or as bottom-up consequences of the readout of motion direction from MT. The time-varying motion-driven responses of MT were strongly affected by our strategic manipulation—but with time courses opposite the subjects' temporal weighting strategies. Furthermore, large choice-correlated signals were represented in population activity distinct from its motion responses, with multiple phases that lagged psychophysical readout and even continued after the stimulus (but which preceded motor responses). In summary, a novel experimental manipulation of strategy allowed us to control the time course of readout to challenge the correlation between sensory responses and choices, and population-level analyses of simultaneously recorded ensembles allowed us to identify strong signals that were so distinct from direction encoding that conventional, single-neuron-centric analyses could not have revealed or properly characterized them. 
Together, these approaches revealed multiple cognitive contributions to MT responses that are task related but not functionally relevant to encoding or decoding of motion for psychophysical direction discrimination, providing a new perspective on the assumed status of MT as a simple sensory area. |
Mathieu Lesourd; Alia Afyouni; Franziska Geringswald; Fabien Cignetti; Lisa Raoul; Julien Sein; Bruno Nazarian; Jean-Luc Anton; Marie-Hélène Grosbras Action observation network activity related to object-directed and socially-directed actions in adolescents Journal Article In: Journal of Neuroscience, vol. 43, no. 1, pp. 125–141, 2023. @article{Lesourd2023, The human action observation network (AON) encompasses brain areas consistently engaged when we observe other's actions. Although the core nodes of the AON are present from childhood, it is not known to what extent they are sensitive to different action features during development. Because social cognitive abilities continue to mature during adolescence, the AON response to socially-oriented actions, but not to object-related actions, may differ in adolescents and adults. To test this hypothesis, we scanned with functional magnetic resonance imaging (fMRI) male and female typically-developing teenagers (n = 28; 13 females) and adults (n = 25; 14 females) while they passively watched videos of manual actions varying along two dimensions: sociality (i.e., directed toward another person or not) and transitivity (i.e., involving an object or not). We found that action observation recruited the same fronto-parietal and occipito-temporal regions in adults and adolescents. The modulation of voxel-wise activity according to the social or transitive nature of the action was similar in both groups of participants. Multivariate pattern analysis, however, revealed that decoding accuracies in intraparietal sulcus (IPS)/superior parietal lobe (SPL) for both sociality and transitivity were lower for adolescents compared with adults. In addition, in the lateral occipital temporal cortex (LOTC), generalization of decoding across the orthogonal dimension was lower for sociality only in adolescents. 
These findings indicate that the representation of the content of others' actions, and in particular their social dimension, in the adolescent AON is still not as robust as in adults. |
Pedro Lencastre; Marit Gjersdal; Leonardo Rydin Gorjão; Anis Yazidi; Pedro G. Lind Modern AI versus century-old mathematical models: How far can we go with generative adversarial networks to reproduce stochastic processes? Journal Article In: Physica D: Nonlinear Phenomena, vol. 453, pp. 1–11, 2023. @article{Lencastre2023, The usage of generative adversarial networks (GANs) for synthetic time-series data generation has been gaining popularity in recent years, with applications from finance to music composition and processing of textual content. However, beyond their reported success, few comparisons exist with other artificial intelligence (AI) methods or standard mathematical models. Here, we test the performance of GANs, comparing them with a well-known mathematical model, namely a Markov chain. We implement comparative metrics based on one- and two-point statistics to evaluate the performance of each method. We find that, similarly to other AI approaches, GANs struggle to capture rare events and cross-feature relations and are unable to create faithful synthetic data. GANs are relatively successful in replicating the auto-correlation function, but they still lag significantly behind simple Markov chains. We also provide a qualitative explanation for this limitation of AI approaches. |
Rony Lemel; Lilach Shalev; Gal Nitsan; Boaz M. Ben-David Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements Journal Article In: Research in Developmental Disabilities, vol. 133, pp. 1–15, 2023. @article{Lemel2023, Background: Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. Aims: Gauging the effects of ADHD on segregating the spoken target-word from its sound-sharing competitor, in MTB and under working-memory (WM) load. Methods and procedures: Twenty-four yaADHD and 22 matched controls who differed in sustained attention (SA) but not in WM were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low-load) or four (high-load) digit/s for later recall. Their eye fixations were tracked. Outcomes and results: In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. Conclusions and implications: ADHD slows speech processing in adverse listening conditions and increases hesitation, as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. What this paper adds: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment and social interactions.
It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD. The current study is the first to investigate online speech processing as the word unfolds in time using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, on the single word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for their quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on WM standardized tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion. |
Yen-Lin Lee; Hsuan-Chih Chen; Yu-Chen Chan The attentional bias of gelotophobes towards emotion words containing the Chinese character for ‘laugh': An eye-tracking approach Journal Article In: Current Psychology, vol. 42, no. 19, pp. 16330–16343, 2023. @article{Lee2023c, Gelotophobes are typically characterized by the fear of laughter, social withdrawal, and humorlessness, possibly related to negative experiences of being laughed at in the past. The present study seeks to expand our understanding of gelotophobia through a relatively novel approach: using eye-tracking to investigate the attentional bias of gelotophobes and non-gelotophobes towards negative emotion words that do and do not contain the Chinese character for “laugh,” by comparing responses to negative ridicule words (RID), negative contempt words (CONT), positive pleasure words (PLE) and neutral words (NEU). Results of the start time of the first run of fixations showed that gelotophobes and non-gelotophobes both focused on negative words before other words. Gelotophobes' attentional bias towards RID and CONT was greater than that of non-gelotophobes in first gaze duration, percentage of total viewing duration, total fixation count, and run count, suggesting that gelotophobes had greater difficulty in disengaging their attention from negative to neutral words. Non-gelotophobes' attentional bias, however, towards negative ridicule neutral words (RID-NEU) and negative contempt neutral words (CONT-NEU) was greater than that of gelotophobes, suggesting that non-gelotophobes were more able to shift attention from negative to neutral words. Moreover, gelotophobes paid significantly more attention to RID than CONT, suggesting that gelotophobes displayed a longer and stronger attentional bias towards RID (containing the “laugh” character). Interestingly, there was no difference for PLE between gelotophobes and non-gelotophobes. 
The present study contributes to our understanding of the attentional bias of gelotophobes and non-gelotophobes towards emotion words. |
Sungyoon Lee The role of spatial ability and attention shifting in reading of illustrated scientific texts: An eye tracking study Journal Article In: Reading Psychology, vol. 44, no. 8, pp. 915–935, 2023. @article{Lee2023b, The purpose of the study is to examine the role of spatial ability and attention shifting in reading of illustrated science texts. Thirty-five fourth- and fifth-grade elementary students read two science texts. Prior knowledge and retention/transfer learning outcomes were measured using researcher-developed measures. While reading, students' eye movements were monitored with an eye-tracker. Several eye movement indices were used to reflect reading processes. Fixation count on text/picture was used to represent students' attentional focus on text or picture. Text-to-text saccades and picture-to-picture saccades were used to reflect students' information organization. Students' integrative reading behavior was measured by eye movement transitions between text and picture. The Wisconsin Card Sorting Test and the Visual Perception Skill Test were used to assess attention shifting and visuospatial working memory, respectively. Multiple regressions were conducted to examine whether students' spatial ability and attention shifting predict text processing, picture processing, or integrative processing of text and picture. Hierarchical regressions were conducted to examine whether students' integrative reading makes unique and direct contributions to their learning outcomes. The study found that 1) both spatial ability and attention shifting are significant predictors of integrative reading behavior, whereas they are not for other processing behaviors (i.e., text processing and picture processing), and 2) integrative reading behaviors in illustrated text reading account for significant amounts of variance in the transfer outcomes but not in the retention outcomes.
This study has practical implications for the development of visual literacy interventions and for how teachers design their instruction about science text reading. |
Michael J. Lee; James J. DiCarlo How well do rudimentary plasticity rules predict adult visual object learning? Journal Article In: PLoS Computational Biology, vol. 19, no. 12, pp. 1–37, 2023. @article{Lee2023a, A core problem in visual object learning is using a finite number of images of a new object to accurately identify that object in future, novel images. One longstanding, conceptual hypothesis asserts that this core problem is solved by adult brains through two connected mechanisms: 1) the re-representation of incoming retinal images as points in a fixed, multidimensional neural space, and 2) the optimization of linear decision boundaries in that space, via simple plasticity rules applied to a single downstream layer. Though this scheme is biologically plausible, the extent to which it explains learning behavior in humans has been unclear, in part because of a historical lack of image-computable models of the putative neural space, and in part because of a lack of measurements of human learning behaviors in difficult, naturalistic settings. Here, we addressed these gaps by 1) drawing from contemporary, image-computable models of the primate ventral visual stream to create a large set of testable learning models (n = 2,408 models), and 2) using online psychophysics to measure human learning trajectories over a varied set of tasks involving novel 3D objects (n = 371,000 trials), which we then used to develop (and publicly release) empirical benchmarks for comparing learning models to humans. We evaluated each learning model on these benchmarks, and found those based on deep, high-level representations from neural networks were surprisingly aligned with human behavior. While no tested model explained the entirety of replicable human behavior, these results establish that rudimentary plasticity rules, when combined with appropriate visual representations, have high explanatory power in predicting human behavior with respect to this core object learning problem. |
Crystal Lee; Andrew Jessop; Amy Bidgood; Michelle S. Peter; Julian M. Pine; Caroline F. Rowland; Samantha Durrant How executive functioning, sentence processing, and vocabulary are related at 3 years of age Journal Article In: Journal of Experimental Child Psychology, vol. 233, pp. 1–21, 2023. @article{Lee2023, There is a wealth of evidence demonstrating that executive function (EF) abilities are positively associated with language development during the preschool years, such that children with good executive functions also have larger vocabularies. However, why this is the case remains to be discovered. In this study, we focused on the hypothesis that sentence processing abilities mediate the association between EF skills and receptive vocabulary knowledge, in that the speed of language acquisition is at least partially dependent on a child's processing ability, which is itself dependent on executive control. We tested this hypothesis in longitudinal data from a cohort of 3- and 4-year-old children at three age points (37, 43, and 49 months). We found evidence, consistent with previous research, for a significant association between three EF skills (cognitive flexibility, working memory [as measured by the Backward Digit Span], and inhibition) and receptive vocabulary knowledge across this age range. However, only one of the tested sentence processing abilities (the ability to maintain multiple possible referents in mind) significantly mediated this relationship and only for one of the tested EFs (inhibition). The results suggest that children who are better able to inhibit incorrect responses are also better able to maintain multiple possible referents in mind while a sentence unfolds, a sophisticated sentence processing ability that may facilitate vocabulary learning from complex input. |
Thibaut Le Naour; Michael Papinutto; Muriel Lobier; Jean-Pierre Bresciani Controlling the trajectory of a moving object substantially shortens the latency of motor responses to visual stimuli Journal Article In: iScience, vol. 26, no. 6, pp. 1–14, 2023. @article{LeNaour2023, Motor responses to visual stimuli have shorter latencies for controlling than for initiating movement. The shorter latencies observed for movement control are notably believed to reflect the involvement of forward models when controlling moving limbs. We assessed whether controlling a moving limb is a “requisite” to observe shortened response latencies. The latency of button-press responses to a visual stimulus was compared between conditions involving or not involving the control of a moving object, but never involving any actual control of a body segment. When the motor response controlled a moving object, response latencies were significantly shorter and less variable, probably reflecting faster sensorimotor processing (as assessed by fitting a LATER model to our data). These results suggest that when the task at hand entails a control component, the sensorimotor processing of visual information is hastened, even if the task does not actually require controlling a moving limb. |
Jochen Laubrock; Alexander Krutz; Jonathan Nübel; Sebastian Spethmann Gaze patterns reflect and predict expertise in dynamic echocardiographic imaging Journal Article In: Journal of Medical Imaging, vol. 10, no. S1, pp. 1–20, 2023. @article{Laubrock2023, Purpose: Echocardiography is the most important modality in cardiac imaging. Rapid valid visual assessment is a critical skill for image interpretation. However, it is unclear how skilled viewers assess echocardiographic images. Therefore, guidance and implicit advice are needed for learners to achieve valid image interpretation. Approach: Using a signal detection approach, we compared 15 certified experts with 15 medical students in their diagnostic decision-making and viewing behavior. To quantify attention allocation, we recorded eye movements while viewing dynamic echocardiographic imaging loops of patients with reduced ejection fraction and healthy controls. Participants evaluated left ventricular ejection fraction and image quality (as diagnostic and visual control tasks, respectively). Results: Experts were much better at discriminating between patients and healthy controls (d′ of 2.58, versus 0.98 for novices). Eye tracking revealed that experts fixated diagnostically relevant areas earlier and more often, whereas novices were distracted by visually salient task-irrelevant stimuli. We show that expertise status can be almost perfectly classified either based on judgments or purely on eye movements and that an expertise score derived from viewing behavior predicts diagnostic quality. Conclusions: Judgments and eye tracking revealed significant differences between echocardiography experts and novices that can be used to derive numerical expertise scores. Experts have implicitly learned to ignore the salient motion cue presented by the mitral valve and to focus on the diagnostically more relevant left ventricle.
These findings have implications for echocardiography training, objective characterization of echocardiographic expertise, and the design of user-friendly interfaces for echocardiography. |
Katja Langer; Valerie L. Jentsch; Oliver T. Wolf Rapid effects of acute stress on cognitive emotion regulation Journal Article In: Psychoneuroendocrinology, vol. 151, pp. 1–10, 2023. @article{Langer2023, Acute stress has been shown to either enhance or impair emotion regulation (ER) performances. Besides sex, strategy use and stimulus intensity, another moderating factor appears to be timing of the ER task relative to stress exposure. Whereas somewhat delayed increases in the stress hormone cortisol have been shown to improve ER performances, rapid sympathetic nervous system (SNS) actions might oppose such effects via cognitive regulatory impairments. Here, we thus investigated rapid effects of acute stress on two ER strategies: reappraisal and distraction. N = 80 healthy participants (40 men & 40 women) were exposed to the Socially Evaluated Cold-Pressor Test or a control condition immediately prior to an ER paradigm which required them to deliberately downregulate emotional responses towards high intensity negative pictures. Subjective ratings and pupil dilation served as ER outcome measures. Increases in salivary cortisol and cardiovascular activity (index of SNS activation) verified successful induction of acute stress. Unexpectedly, stress reduced subjective emotional arousal when distracting from negative pictures in men indicating regulatory improvements. However, this beneficial effect was particularly pronounced in the second half of the ER paradigm and fully mediated by already rising cortisol levels. In contrast, cardiovascular responses to stress were linked to decreased subjective regulatory performances of reappraisal and distraction in women. However, no detrimental effects of stress on ER occurred at the group level. Yet, our findings provide initial evidence for rapid, opposing effects of the two stress systems on the cognitive control of negative emotions that are critically moderated by sex. |
Elke B. Lange; Lauren K. Fink Eye blinking, musical processing, and subjective states—A methods account Journal Article In: Psychophysiology, vol. 60, no. 10, pp. 1–20, 2023. @article{Lange2023, Affective sciences often make use of self-reports to assess subjective states. Seeking a more implicit measure for states and emotions, our study explored spontaneous eye blinking during music listening. However, blinking is understudied in the context of research on subjective states. Therefore, a second goal was to explore different ways of analyzing blink activity recorded from infra-red eye trackers, using two additional data sets from earlier studies differing in blinking and viewing instructions. We first replicate the effect of increased blink rates during music listening in comparison with silence and show that the effect is not related to changes in self-reported valence, arousal, or to specific musical features. Interestingly, but in contrast, felt absorption reduced participants' blinking. The instruction to inhibit blinking did not change results. From a methodological perspective, we make suggestions about how to define blinks from data loss periods recorded by eye trackers and report a data-driven outlier rejection procedure and its efficiency for subject-mean analyses, as well as trial-based analyses. We ran a variety of mixed effects models that differed in how trials without blinking were treated. The main results largely converged across accounts. The broad consistency of results across different experiments, outlier treatments, and statistical models demonstrates the reliability of the reported effects. As recordings of data loss periods come for free when interested in eye movements or pupillometry, we encourage researchers to pay attention to blink activity and contribute to the further understanding of the relation between blinking, subjective states, and cognitive processing. |
E. Landová; I. Štolhoferová; B. Vobrubová; J. Polák; K. Sedláčková; M. Janovcová; S. Rádlová; D. Frynta In: Scientific Reports, vol. 13, no. 1, pp. 1–13, 2023. @article{Landova2023, Spiders are among the animals evoking the highest fear and disgust and such a complex response might have been formed throughout human evolution. Ironically, most spiders do not present a serious threat, so the evolutionary explanation remains questionable. We suggest that other chelicerates, such as scorpions, have been potentially important in the formation and fixation of the spider-like category. In this eye-tracking study, we focused on the attentional, behavioral, and emotional response to images of spiders, scorpions, snakes, and crabs used as task-irrelevant distractors. Results show that spider-fearful subjects were selectively distracted by images of spiders and crabs. Interestingly, these stimuli were not rated as eliciting high fear contrary to the other animals. We hypothesize that spider-fearful participants might have mistaken crabs for spiders based on their shared physical characteristics. In contrast, subjects with no fear of spiders were the most distracted by snakes and scorpions which supports the view that scorpions as well as snakes are prioritized evolutionary relevant stimuli. We also found that the reaction time increased systematically with increasing subjective fear of spiders only when using spiders (and crabs to some extent) but not snakes and scorpions as distractors. The maximal pupil response covered not only the attentional and cognitive response but was also tightly correlated with the fear ratings of the picture stimuli. However, participants' fear of spiders did not affect individual reactions to scorpions measured by the maximal pupil response. We conclude that scorpions are evolutionary fear-relevant stimuli, however, the generalization between scorpions and spiders was not supported in spider-fearful participants. 
This result might be important for a better understanding of the evolution of spider phobia. |
Yuxiang Lan; Qunyue Liu; Zhipeng Zhu Exploring landscape design intensity effects on visual preferences and eye fixations in urban forests: Insights from eye tracking technology Journal Article In: Forests, vol. 14, no. 8, pp. 1–16, 2023. @article{Lan2023, Individuals' preferences for urban forest scenes are an essential factor in the design process. This study explores the connection between landscape design intensity, visual preferences, and eye fixations in urban forest scenes. Five pictures representing different urban forest scenes (plaza, lawn, garden path, pond, and rockery) were selected as stimuli, representing the original landscape design intensity. Three additional levels of design intensity (low, moderate, and high) were created by modifying the landscape elements of the original picture. A group of 50 participants was randomly assigned to observe the four levels of design intensity pictures within each type of landscape using eye-tracking technology. They also rated their preferences for each scene. In total, 250 participants took part in the study, with five groups observing five types of urban forest scenes. The results indicate that landscape design intensity has a positive impact on visual preferences, with moderate design intensity showing the strongest effect. However, the influence of design intensity and preferences also depends on the specific landscape scene. The fixation data did not show a significant relationship with design intensity but were associated with the type of landscape scene. In conclusion, this study suggests that moderate design intensity is recommended for urban forest design. However, it also highlights the importance of considering the specific landscape scene type. The research provides valuable insights into urban forest design and contributes to the understanding of eye-tracking technology in landscape perception studies. |
Charlene L. M. Lam; Tom J. Barry; Jenny Yiend; Tatia M. C. Lee The role of consciousness in threat extinction learning Journal Article In: Consciousness and Cognition, vol. 116, pp. 1–12, 2023. @article{Lam2023, Extinction learning is regarded as a core mechanism underlying exposure therapy. The extent to which learned threats can be extinguished without conscious awareness is a controversial and on-going debate. We investigated whether implicit vs. explicit exposure to a threatened stimulus can modulate defence responses measured using pupillometry. Healthy participants underwent a threat conditioning paradigm in which one of the conditioned stimuli (CS) was perceptually suppressed using continuous flash suppression (CFS). Participants' pupillary responses, CS pleasantness ratings, and trial-by-trial awareness of the CS were recorded. During Extinction, participants' pupils dilated more in the trials in which they were unaware of the CS than in those in which they were aware of it (Cohen's d = 0.57). After reinstatement, the percentage of fear recovery was greater for the CFS-suppressed CS than the CS with full awareness. The current study suggests that the modulation of fear responses by extinction with reduced visual awareness is weaker compared to extinction with full perceptual awareness. |
Yao-Ying Lai; David Braze; Maria Mercedes Piñango The time-course of contextual modulation for underspecified meaning Journal Article In: The Mental Lexicon, vol. 18, no. 1, pp. 41–93, 2023. @article{Lai2023a, Sentences like (1) “The singer began the album” are ambiguous between an agentive reading (The singer began recording/playing/etc. the album) and a constitutive reading (The singer's song was the first track). The ambiguity is rooted in the meaning specification of the aspectual-verb class, which demands its complement be construed as a structured individual along a dimension (e.g., spatial, informational, eventive). In (1), the complement can be construed as a set of eventualities (eventive) or musical content (informational). Processing aspectual-verb sentences is shown to involve (a) exhaustive lexical-function retrieval and (b) construal of multiple dimension-specific structured individuals, leading to multiple compositions with agentive and constitutive readings. The ultimate interpretation depends on the biased dimensions in context. Our eye-tracking study comparing sentences in different contexts (agentive vs. constitutive-biasing) shows not only the aspectual-verb composition effect, previously reported for the agentive readings, but also a comparable processing profile for the constitutive readings, a novel finding supporting the unified linguistic analysis and processing implementation of the two readings. Regardless of reading, the composition effect is observable even after the complement has been retrieved, indicating that the fundamental lexico-semantic compositional processes must take place before context can serve as a constraining force. |
Cheng-Ji Lai; Li-You Chang In: Social Sciences and Humanities Open, vol. 8, no. 1, pp. 1–8, 2023. @article{Lai2023, This study investigated how undergraduate students with different levels of translation proficiency employed translation principles and techniques in English-Chinese sight translation tasks, and how this affected their cognitive processing and performance. Participants were grouped into high-, intermediate-, and low-levels based on placement tests, and completed pre- and post-tests after a translation course. Their use of three translation principles (fidelity, fluency, and elegance) and techniques (segmentation, conversion, and addition) was measured using EyeLink eye tracking, and participants were interviewed to evaluate their metacognitive reflections on their translations. The results show that the high- and intermediate-level groups completed the sight translation post-test faster than the pre-test. The use of segmentation, restructuring, and conversion techniques was found to benefit students the most in sight translation tasks, and the intermediate-level group outperformed the other groups by making a greater cognitive effort in restructuring and refining their translations to achieve a higher level of competence in the elegance principle. The study provides pedagogical implications and scholarly significance for the application of translation principles and techniques in sight translation between Chinese and English. |
Hend Lahoud; David L. Share; Adi Shechter A developmental study of eye movements in Hebrew word reading: The effects of word familiarity, word length, and reading proficiency Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–11, 2023. @article{Lahoud2023a, Previous studies examining the link between visual word recognition and eye movements have shown that eye movements reflect the time-course of cognitive processes involved in reading. Whereas most studies have been undertaken in Western European languages written in the Roman alphabet, the present developmental study investigates a non-European language—Hebrew, which is written in a non-alphabetic (abjadic) script. We compared the eye movements of children in Grades 4 to 6 (N = 30) and university students (N = 30) reading familiar real words and unfamiliar (pseudo)words of 3 letters and 5 letters in length. Using linear mixed models, we focused on the effects of word familiarity, word length, and age group. Our results highlight both universal aspects of word reading (developmental and familiarity (lexicality) effects) as well as a language-specific word length effect, which appears to be related to the unique morphological and orthographic features of the Semitic abjad. |
Hend Lahoud; Zohar Eviatar; Hamutal Kreiner Eye-movement patterns in skilled Arabic readers: Effects of specific features of Arabic versus universal factors Journal Article In: Reading and Writing, pp. 1–30, 2023. @article{Lahoud2023, This study aims to shed light on the contribution of universal versus language-specific factors to reading. We examined the eye movements of Arabic readers and analyzed effects specific to Arabic, such as perceptual complexity, diglossia and morphology, in addition to universal factors such as word length and frequency. Twenty native Arabic speakers read continuous texts in Modern Standard Arabic (MSA) while their eye movements were monitored. A corpus-based analysis was carried out to test effects specific to Arabic and effects of the benchmark eye movement factors. We found that perceptually more complex words received longer fixation durations; moreover, differences were found in processing words unique to MSA versus words shared between MSA and the spoken Arabic vernacular. This is the first indication of these effects during an eye movement reading task. However, the effect of morphological length was not significant when included in the model with all predictors. Lastly, the benchmark factors were significant, showing effects of word length, word frequency and part of speech: short and frequent words are processed faster than longer and less frequent words, and function words are often skipped. We conclude that the eye movements of Arabic readers reflect proficient reading, yet they also exhibit an ongoing challenge in processing the written language. |
Sol Lago; Kate Stone; Elise Oltrogge; João Veríssimo Possessive processing in bilingual comprehension Journal Article In: Language Learning, vol. 73, no. 3, pp. 904–941, 2023. @article{Lago2023, Second language (L2) learners make gender errors with possessive pronouns. In production, these errors are modulated by the gender match between the possessor and possessee noun. We examined whether this so-called match effect extends to L2 comprehension by attempting to replicate a recent study on gender predictions in first language (L1) German speakers (Stone, Veríssimo, et al., 2021). By comparing Spanish and English learners of L2 German whose languages have different possessive constraints, we were able to examine whether the match effect was modulated by the participants' L1. A first experiment suggested that predictions and match effects were absent in setups with complex visual displays. A second experiment with simpler displays successfully elicited predictions and match effects, but their size was comparable in Spanish and English speakers, inconsistent with crosslinguistic influence. We interpret our results as evidence that processing difficulties with possessives result from memory interference that impacts both L1 and L2 comprehenders. |
Rosa Lafer-Sousa; Karen Wang; Reza Azadi; Emily Lopez; Simon Bohn; Arash Afraz Behavioral detectability of optogenetic stimulation of inferior temporal cortex varies with the size of concurrently viewed objects Journal Article In: Current Research in Neurobiology, vol. 4, pp. 1–7, 2023. @article{LaferSousa2023, We have previously demonstrated that macaque monkeys can behaviorally detect a subtle optogenetic impulse delivered to their inferior temporal (IT) cortex. We have also shown that the ability to detect the cortical stimulation impulse varies depending on some characteristics of the visual images viewed at the time of brain stimulation, revealing the visual nature of the perceptual events induced by stimulation of the IT cortex. Here we systematically studied the effect of the size of viewed objects on behavioral detectability of optogenetic stimulation of the central IT cortex. Surprisingly, we found that behavioral detection of the same optogenetic impulse highly varies with the size of the viewed object images. Reduction of the object size in four steps from 8 to 1 degree of visual angle significantly decreased detection performance. These results show that identical stimulation impulses delivered to the same neural population induce variable perceptual events depending on the mere size of the objects viewed at the time of brain stimulation. |
Marianna Kyriacou; Kathy Conklin; Dominic Thompson Ambiguity resolution in passivized idioms: Is there a shift in the most likely interpretation? Journal Article In: Canadian Journal of Experimental Psychology, vol. 77, no. 3, pp. 212–226, 2023. @article{Kyriacou2023, Ambiguous but canonical idioms (kick the bucket) are processed fast in both their figurative (“die”) and literal (“boot the pail”) senses, although processing costs associated with meaning integration may emerge in postidiom regions. Modified versions (the bucket was kicked) are processed more slowly than canonical configurations when intended figuratively. We hypothesized that modifications delay idiom recognition and prioritize the literal meaning, yielding processing costs when the context warrants a figurative interpretation. To test this, we designed an eye-tracking study, where passivized idioms were followed by “WABBLE” relating to their literal (bucket—water) or figurative (dead—body) meaning, or were incongruent (time). The remaining context was identical. The findings showed a facilitation for the literal meaning: WABBLE and passivized idioms in the literal condition were read significantly faster in go-past and total reading time, respectively, compared to both the figurative and control conditions. However, both literal and figurative WABBLE were processed equally fast (and significantly faster than controls) in total reading time. In support of our hypothesis, the literal meaning of passivized idioms appears to be more highly activated and easier to integrate, although the figurative meaning receives some activation that facilitates its (full) retrieval if necessary. |
Yuna Kwak; Nina M. Hanning; Marisa Carrasco Presaccadic attention sharpens visual acuity Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–11, 2023. @article{Kwak2023, Visual perception is limited by spatial resolution, the ability to discriminate fine details. Spatial resolution not only declines with eccentricity but also differs for polar angle locations around the visual field, also known as ‘performance fields'. To compensate for poor peripheral resolution, we make rapid eye movements—saccades—to bring peripheral objects into high-acuity foveal vision. Already before saccade onset, visual attention shifts to the saccade target location and prioritizes visual processing. This presaccadic shift of attention improves performance in many visual tasks, but whether it changes resolution is unknown. Here, we investigated whether presaccadic attention sharpens peripheral spatial resolution; and if so, whether such effect interacts with performance fields asymmetries. We measured acuity thresholds in an orientation discrimination task during fixation and saccade preparation around the visual field. The results revealed that presaccadic attention sharpens acuity, which can facilitate a smooth transition from peripheral to foveal representation. This acuity enhancement is similar across the four cardinal locations; thus, the typically robust effect of presaccadic attention does not change polar angle differences in resolution. |
Nawras Kurzom; Ilaria Lorenzi; Avi Mendelsohn Increasing the complexity of isolated musical chords benefits concurrent associative memory formation Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–12, 2023. @article{Kurzom2023, The effects of background music on learning and memory are inconsistent, partially due to the intrinsic complexity and diversity of music, as well as variability in music perception and preference. By stripping down musical harmony to its building blocks, namely discrete chords, we explored their effects on memory formation of unfamiliar word-image associations. Chords, defined as two or more simultaneously played notes, differ in the number of tones and inter-tone intervals, yielding varying degrees of harmonic complexity, which translate into a continuum of consonance to dissonance percepts. In the current study, participants heard four different types of musical chords (major, minor, medium complex, and high complex chords) while they learned new word-image pairs of a foreign language. One day later, their memory for the word-image pairs was tested, along with a chord rating session, in which they were required to assess the musical chords in terms of perceived valence, tension, and the extent to which the chords grabbed their attention. We found that musical chords containing dissonant elements were associated with higher memory performance for the word-image pairs compared with consonant chords. Moreover, tension positively mediated the relationship between roughness (a key feature of complexity) and memory, while valence negatively mediated this relationship. The reported findings are discussed in light of the effects that basic musical features have on tension and attention, in turn affecting cognitive processes of associative learning. |
Jan W. Kurzawski; Maria Pombo; Augustin Burchell; Nina M. Hanning; Simon Liao; Najib J. Majaj; Denis G. Pelli EasyEyes — A new method for accurate fixation in online vision testing Journal Article In: Frontiers in Human Neuroscience, vol. 17, pp. 1–12, 2023. @article{Kurzawski2023a, Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online as online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg). EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. It tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the laboratory, using gaze-contingent stimulus presentation; second, in the laboratory, using EasyEyes while independently monitoring gaze using EyeLink 1000; third, online at home, using EasyEyes. We find that crowding thresholds are consistent and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, this method enables fixation-dependent measurements online, for easy testing of larger and more diverse populations. |
Jan W. Kurzawski; Augustin Burchell; Darshan Thapa; Jonathan Winawer; Najib J. Majaj; Denis G. Pelli The Bouma law accounts for crowding in 50 observers Journal Article In: Journal of Vision, vol. 23, no. 8, pp. 1–34, 2023. @article{Kurzawski2023, Crowding is the failure to recognize an object due to surrounding clutter. Our visual crowding survey measured 13 crowding distances (or “critical spacings”) twice in each of 50 observers. The survey includes three eccentricities (0, 5, and 10 deg), four cardinal meridians, two orientations (radial and tangential), and two fonts (Sloan and Pelli). The survey also tested foveal acuity, twice. Remarkably, fitting a two-parameter model—the well-known Bouma law, where crowding distance grows linearly with eccentricity—explains 82% of the variance for all 13 × 50 measured log crowding distances, cross-validated. An enhanced Bouma law, with factors for meridian, crowding orientation, target kind, and observer, explains 94% of the variance, again cross-validated. These additional factors reveal several asymmetries, consistent with previous reports, which can be expressed as crowding-distance ratios: 0.62 horizontal:vertical, 0.79 lower:upper, 0.78 right:left, 0.55 tangential:radial, and 0.78 Sloan-font:Pelli-font. Across our observers, peripheral crowding is independent of foveal crowding and acuity. Evaluation of the Bouma factor, b (the slope of the Bouma law), as a biomarker of visual health would be easier if there were a way to compare results across crowding studies that use different methods. We define a standardized Bouma factor b' that corrects for differences from Bouma's 25 choice alternatives, 75% threshold criterion, and linearly symmetric flanker placement. 
For radial crowding on the right meridian, the standardized Bouma factor b' is 0.24 for this study, 0.35 for Bouma (1970), and 0.30 for the geometric mean across five representative modern studies, including this one, showing good agreement across labs, including Bouma's. Simulations, confirmed by data, show that peeking can skew estimates of crowding (e.g., greatly decreasing the mean or doubling the SD of log b). Using gaze tracking to prevent peeking, individual differences are robust, as evidenced by the much larger 0.08 SD of log b across observers than the mere 0.03 test–retest SD of log b measured in half an hour. The ease of measurement of crowding enhances its promise as a biomarker for dyslexia and visual health. |
Jens Kürten; Tim Raettig; Julian Gutzeit; Lynn Huestegge In: Psychological Research, vol. 87, no. 2, pp. 410–424, 2023. @article{Kuerten2023a, Previous research has shown that the simultaneous execution of two actions (instead of only one) is not necessarily more difficult but can actually be easier (less error-prone), in particular when executing one action requires the simultaneous inhibition of another action. Corresponding inhibitory demands are particularly challenging when the to-be-inhibited action is highly prepotent (i.e., characterized by a strong urge to be executed). Here, we study a range of important potential sources of such prepotency. Building on a previously established paradigm to elicit dual-action benefits, participants responded to stimuli with single actions (either manual button press or saccade) or dual actions (button press and saccade). Crucially, we compared blocks in which these response demands were randomly intermixed (mixed blocks) with pure blocks involving only one type of response demand. The results highlight the impact of global (action-inherent) sources of action prepotency, as reflected in more pronounced inhibitory failures in saccade vs. manual control, but also more local (transient) sources of influence, as reflected in a greater probability of inhibition failures following trials that required the to-be-inhibited type of action. In addition, sequential analyses revealed that inhibitory control (including its failure) is exerted at the level of response modality representations, not at the level of fully specified response representations. In sum, the study highlights important preconditions and mechanisms underlying the observation of dual-action benefits. |
Jens Kürten; Tim Raettig; Julian Gutzeit; Lynn Huestegge Preparing for simultaneous action and inaction: Temporal dynamics and target levels of inhibitory control Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 7, pp. 1068–1082, 2023. @article{Kuerten2023, When a single action is required, along with the simultaneous inhibition of another action, this typically results in frequent false-positive executions of the latter (inhibition failures). The absence of inhibitory demands in dual-action trials can render performance less error-prone (and sometimes faster) than in single-action trials. In the present study, we investigated the temporal dynamics of inhibitory control difficulties by varying the preparation time (for simultaneous action execution and inhibition). In two experiments, participants responded to a single peripheral visual target either with an eye movement toward it (Single Saccade), with a spatially corresponding button press (Single Manual), or with both responses simultaneously (Dual Action) as indicated by a color cue. Preparation time was manipulated via the cue-stimulus interval within blocks (Experiment 1) and between blocks (Experiment 2). Overall, responses were faster with longer (vs. shorter) preparation time. Crucially, however, our results reveal the exact dynamics of how inhibition failures (and thus dual-action benefits) in both response modalities substantially decrease with longer preparation, even though the cue did not contain information regarding the fully specified response that needed to be inhibited (i.e., its direction). These results highlight the role of sufficient preparation time not only for efficient action execution but also for concurrent inhibitory performance. The study contradicts the idea that inhibition can only be exerted globally or on the level of a fully specified response. 
Instead, it may also be directed at effector system representations or all associated responses, suggesting a highly flexible targeting of inhibitory control in cognition. |
Didem Kurt; Nazik Dinçtopal Deniz Processing focus in Turkish Journal Article In: Languages, vol. 8, no. 1, pp. 1–22, 2023. @article{Kurt2023, The immediately preverbal position has been argued to be the default focus position in Turkish. In the absence of any overt focus markers, the constituent in this position is considered to carry sentential stress and neutral information for canonical word-order sentences and focus is projected to the whole sentence in the form of broad focus. In non-canonical word-order sentences, the immediately preverbal constituent is presumed to carry focal stress and the focused constituent would receive narrow focus. This paper tested this claim experimentally. The paper also investigated if there were any differences in the cognitive operations associated with processing and revising focus in canonical and non-canonical sentences. There were a sentence completion task and an eye-tracking experiment. The sentence completion data and the eye-tracking data supported the theoretical predictions: the immediately preverbal position was associated with default focus in Turkish when no pitch accentuation or other focus markers were available. The eye-tracking data further showed that changes to word-order were perceived as cues for broad versus narrow focus marking. The participants' processing of and revision from narrow focus were costlier than processing broad focus and assigning narrow focus for the first time. We argue, in line with previous research, that this may be due to deeper encoding of focused information in memory or heavier memory load resulting from keeping a set of alternatives of the focused constituent when it has contrastive meaning. |
Eswar Kurni; Manish Reddy Yedulla; PremNandhini Satgunam Microsaccadic eye movement orientations are equivocal in the presence of competing stimuli Journal Article In: Asian Journal of Physics, vol. 32, no. 3&4, pp. 159–166, 2023. @article{Kurni2023, Will someone reflexively look towards a primed target or to a non-primed target, when no instructions are given? Knowing this could help design visual function tests without the need for instructions. Simply, a target could be presented for a “priming phase” followed by two targets, one of which is the primed target and the other is not. We asked to which target an observer will look. We studied this on normally-sighted adults. Eye movements were tracked using an EyeLink 1000 Plus eye tracker and microsaccades were analyzed. The targets presented were from LEA symbols that are commonly used in children's visual acuity charts. Target size (15', 20' or 25') and presentation duration (200, 400 or 600 ms) were randomized. No instructions were given to the participants beyond asking them to look at the computer monitor in experiment I, and instructions were given to specifically look towards the primed target in experiment II. Overall, we found that no preference (proportion of microsaccades <50%) was observed either to the primed or to the novel target in either of the experiments. The presence of two competing stimuli abolishes the microsaccade orientation to a target of interest, even with explicit verbal instructions. |
Victor Kuperman; Noam Siegelman; Sascha Schroeder; Cengiz Acartürk; Svetlana Alexeeva; Simona Amenta; Raymond Bertram; Rolando Bonandrini; Marc Brysbaert; Daria Chernova; Sara Maria Da Fonseca; Nicolas Dirix; Wouter Duyck; Argyro Fella; Ram Frost; Carolina A. Gattei; Areti Kalaitzi; Kaidi Lõo; Marco Marelli; Kelly Nisbet; Timothy C. Papadopoulos; Athanassios Protopapas; Satu Savo; Diego E. Shalom; Natalia Slioussar; Roni Stein; Longjiao Sui; Analí Taboh; Veronica Tønnesen; Kerem Alp Usal Text reading in English as a second language: Evidence from the Multilingual Eye-Movements Corpus Journal Article In: Studies in Second Language Acquisition, vol. 45, no. 1, pp. 3–37, 2023. @article{Kuperman2023, Research into second language (L2) reading is an exponentially growing field. Yet, it still has a relatively short supply of comparable, ecologically valid data from readers representing a variety of first languages (L1). This article addresses this need by presenting a new data resource called MECO L2 (Multilingual Eye Movements Corpus), a rich behavioral eye-tracking record of text reading in English as an L2 among 543 university student speakers of 12 different L1s. MECO L2 includes a test battery of component skills of reading and allows for a comparison of the participants' reading performance in their L1 and L2. This data resource enables innovative large-scale cross-sample analyses of predictors of L2 reading fluency and comprehension. We first introduce the design and structure of the MECO L2 resource, along with reliability estimates and basic descriptive analyses. Then, we illustrate the utility of MECO L2 by quantifying contributions of four sources to variability in L2 reading proficiency proposed in prior literature: reading fluency and comprehension in L1, proficiency in L2 component skills of reading, extralinguistic factors, and the L1 of the readers. 
Major findings included (a) a fundamental contrast between the determinants of L2 reading fluency versus comprehension accuracy, and (b) high within-participant consistency in the real-time strategy of reading in L1 and L2. We conclude by reviewing the implications of these findings to theories of L2 acquisition and outline further directions in which the new data resource may support L2 reading research. |
Wupadrasta Santosh Kumar; Supratim Ray Healthy ageing and cognitive impairment alter EEG functional connectivity in distinct frequency bands Journal Article In: European Journal of Neuroscience, vol. 58, no. 6, pp. 3432–3449, 2023. @article{Kumar2023, Functional connectivity (FC) indicates the interdependencies between brain signals recorded from spatially distinct locations in different frequency bands, which is modulated by cognitive tasks and is known to change with ageing and cognitive disorders. Recently, the power of narrow-band gamma oscillations induced by visual gratings has been shown to decrease both with healthy ageing and in subjects with mild cognitive impairment (MCI). However, the impact of ageing/MCI on stimulus-induced gamma FC has not been well studied. We recorded electroencephalogram (EEG) from a large cohort (N = 229) of elderly subjects (>49 years) while they viewed large cartesian gratings to induce gamma oscillations and studied changes in alpha and gamma FC with healthy ageing (N = 218) and MCI (N = 11). Surprisingly, we found distinct differences across age and MCI groups in power and FC. With healthy ageing, alpha power did not change but FC decreased significantly. MCI reduced gamma but not alpha FC significantly compared with age and gender matched controls, even when power was matched between the two groups. Overall, our results suggest distinct effects of ageing and disease on EEG power and FC, suggesting different mechanisms underlying ageing and cognitive disorders. |
Mrinmayi Kulkarni; Allison E. Nickel; Greta N. Minor; Deborah E. Hannula Control of memory retrieval alters memory-based eye movements Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–22, 2023. @article{Kulkarni2023, Past work has shown that eye movements are affected by long-term memory across different tasks and instructional manipulations. In the current study, we tested whether these memory-based eye movements persist when memory retrieval is under intentional control. Participants encoded multiple scenes with six objects (three faces; three tools). Next, they completed a memory regulation and visual search task, while undergoing eye tracking. Here, scene cues were presented and participants either retrieved the encoded associate, suppressed it, or substituted it with a specific object from the other encoded category. Following a delay, a search display consisting of six dots intermixed with the six encoded objects was presented. Participants' task was to fixate one remaining dot after five had disappeared. Incidental viewing of the objects was of interest. Results revealed that performance in a final recognition phase was impaired for suppressed pairs, but only when the associate was a tool. During the search task, incidental associate viewing was lower when participants attempted to control retrieval, whereas one object from the nonassociate category was most viewed in the substitute condition. Additionally, viewing patterns in the search phase were related to final recognition performance, but the direction of this association differed between conditions. Overall, these results suggest that eye movements are attracted to information retrieved from long-term memory and held active (the associate in the retrieve condition, or an object from the other category in the substitute condition). 
Furthermore, the level of viewing may index the strength of the representation of retrieved information. |
Jan Kujala; Sasu Mäkelä; Pauliina Ojala; Jukka Hyönä; Riitta Salmelin Beta- and gamma-band cortico-cortical interactions support naturalistic reading of continuous text Journal Article In: European Journal of Neuroscience, pp. 1–14, 2023. @article{Kujala2023, Large-scale integration of information across cortical structures, building on neural connectivity, has been proposed to be a key element in supporting human cognitive processing. In electrophysiological neuroimaging studies of reading, quantification of neural interactions has been limited to the level of isolated words or sentences due to artefacts induced by eye movements. Here, we combined magnetoencephalography recording with advanced artefact rejection tools to investigate both cortico-cortical coherence and directed neural interactions during naturalistic reading of full-page texts. Our results show that reading versus visual scanning of text was associated with wide-spread increases of cortico-cortical coherence in the beta and gamma bands. We further show that the reading task was linked to increased directed neural interactions compared to the scanning task across a sparse set of connections within a wide range of frequencies. Together, the results demonstrate that neural connectivity flexibly builds on different frequency bands to support continuous natural reading. |
Justin B. Kueser; Ryan Peters; Arielle Borovsky The role of semantic similarity in verb learning events: Vocabulary-related changes across early development Journal Article In: Journal of Experimental Child Psychology, vol. 226, pp. 1–19, 2023. @article{Kueser2023, Verb meaning is challenging for children to learn across varied events. This study examined how the taxonomic semantic similarity of the nouns in novel verb learning events in a progressive alignment learning condition differed from the taxonomic dissimilarity of nouns in a dissimilar learning condition in supporting near (similar) and far (dissimilar) verb generalization to novel objects in an eye-tracking task. A total of 48 children in two age groups (23 girls; younger: 21–24 months |
Emmanouil Ktistakis; Panagiotis Simos; Miltiadis K. Tsilimbaris; Sotiris Plainis Efficacy of wet age-related macular degeneration treatment on reading: A pilot study using eye-movement analysis Journal Article In: Optometry and Vision Science, vol. 100, no. 10, pp. 670–678, 2023. @article{Ktistakis2023, SIGNIFICANCE Functional vision, as evaluated with silent passage reading speed, improves after anti-vascular endothelial growth factor (anti-VEGF) treatment in patients with wet age-related macular degeneration (wAMD), reflecting primarily a concomitant reduction in the number of fixations. Implementing eye movement analysis when reading may better characterize the effectiveness of therapeutic approaches in wAMD. PURPOSE This study aimed to evaluate silent reading performance by means of eye fixation analysis before and after anti-VEGF treatment in wAMD patients. METHODS Sixteen wAMD patients who underwent anti-VEGF treatment in one eye and had visual acuity (VA) better than 0.5 logMAR served as the AMD group. Twenty adults without ocular pathology served as the control group. Central retinal thickness and near VA were assessed at baseline and 3 to 4 months after their first visit. Reading performance was evaluated using short passages of 0.4-logMAR print size. Eye movements were recorded using an EyeLink II video eye tracker. Data analysis included computation of reading speed, fixation duration, number of fixations, and percentage of regressions. Frequency distributions of fixation durations were analyzed with ex-Gaussian fittings. RESULTS In the AMD group, silent reading speed in the treated eye correlated well with central retinal thickness reduction and improved significantly by an average of 15.9 ± 28.5 words per minute (P = .04). This improvement was accompanied by an average reduction of 0.24 ± 0.38 in fixations per word (P = .03). The corresponding improvement in monocular VA was not statistically significant. 
Other eye fixation parameters did not change significantly after treatment. No statistically significant differences were found in the control group. CONCLUSIONS Visual acuity tests may underestimate the potential therapeutic effects after anti-VEGF treatment in patients with relatively good acuity who are being treated for wAMD. Evaluating silent reading performance and eye fixation parameters may better characterize the effectiveness of therapeutic approaches in wAMD patients. |
Alina Krug; Lisa Valentina Eberhardt; Anke Huckauf Transient attention does not alter the eccentricity effect in estimation of duration Journal Article In: Attention, Perception, & Psychophysics, pp. 1–12, 2023. @article{Krug2023, Previous research investigating the influence of stimulus eccentricity on perceived duration showed an increasing duration underestimation with increasing eccentricity. Based on studies showing that precueing the stimulus location prolongs perceived duration, one might assume that this eccentricity effect is influenced by spatial attention. In the present study, we assessed the influence of transient covert attention on the eccentricity effect in duration estimation in two experiments, one online and one in a laboratory setting. In a duration estimation task, participants judged whether a comparison stimulus presented near or far from fixation with a varying duration was shorter or longer than a standard stimulus presented foveally with a constant duration. To manipulate transient covert attention, either a transient luminance cue was used (valid cue) to direct attention to the position of the subsequent peripheral comparison stimulus or all positions were marked by luminance (neutral cue). Results of both experiments yielded a greater underestimation of duration for the far than for the near stimulus, replicating the eccentricity effect. Although cueing was effective (i.e., shorter response latencies for validly cued stimuli), cueing did not alter the eccentricity effect on estimation of duration. This indicates that cueing leads to covert attentional shifts but does not account for the eccentricity effect in perceived duration. |
Philipp Kreyenmeier; Anna Schroeger; Rouwen Cañal-Bruland; Markus Raab; Miriam Spering Rapid audiovisual integration guides predictive actions Journal Article In: eNeuro, vol. 10, no. 8, pp. 1–10, 2023. @article{Kreyenmeier2023, Natural movements, such as catching a ball or capturing prey, typically involve multiple senses. Yet, laboratory studies on human movements commonly focus solely on vision and ignore sound. Here, we ask how visual and auditory signals are integrated to guide interceptive movements. Human observers tracked the brief launch of a simulated baseball, randomly paired with batting sounds of varying intensities, and made a quick pointing movement at the ball. Movement end points revealed systematic overestimation of target speed when the ball launch was paired with a loud versus a quiet sound, although sound was never informative. This effect was modulated by the availability of visual information; sounds biased interception when the visual presentation duration of the ball was short. Amplitude of the first catch-up saccade, occurring ∼125 ms after target launch, revealed early integration of audiovisual information for trajectory estimation. This sound-induced bias was reversed during later predictive saccades when more visual information was available. Our findings suggest that auditory and visual signals are integrated to guide interception and that this integration process must occur early at a neural site that receives auditory and visual signals within an ultrashort time span. |
Isabel Kreis; Lei Zhang; Matthias Mittner; Leonard Syla; Claus Lamm; Gerit Pfuhl In: Cognitive, Affective, & Behavioral Neuroscience, vol. 23, no. 3, pp. 727–741, 2023. @article{Kreis2023, Aberrant belief updating due to misestimation of uncertainty and an increased perception of the world as volatile (i.e., unstable) has been found in autism and psychotic disorders. Pupil dilation tracks events that warrant belief updating, likely reflecting the adjustment of neural gain. However, whether subclinical autistic or psychotic symptoms affect this adjustment and how they relate to learning in volatile environments remains to be unraveled. We investigated the relationship between behavioral and pupillometric markers of subjective volatility (i.e., experience of the world as unstable), autistic traits, and psychotic-like experiences in 52 neurotypical adults with a probabilistic reversal learning task. Computational modeling revealed that participants with higher psychotic-like experience scores overestimated volatility in low-volatile task periods. This was not the case for participants scoring high on autistic-like traits, who instead showed a diminished adaptation of choice-switching behavior in response to risk. Pupillometric data indicated that individuals with higher autistic- or psychotic-like trait and experience scores differentiated less between events that warrant belief updating and those that do not when volatility was high. These findings are in line with misestimation of uncertainty accounts of psychosis and autism spectrum disorders and indicate that aberrancies are already present at the subclinical level. |
Linda Krauze; Mara Delesa-Velina; Tatjana Pladere; Gunta Krumina Why 2D layout in 3D images matters: Evidence from visual search and eyetracking Journal Article In: Journal of Eye Movement Research, vol. 16, no. 1, pp. 1–12, 2023. @article{Krauze2023, Precise perception of three-dimensional (3D) images is crucial for a rewarding experience when using novel displays. However, the capability of the human visual system to perceive binocular disparities varies across the visual field, meaning that depth perception might be affected by the two-dimensional (2D) layout of items on the screen. Nevertheless, potential difficulties in perceiving 3D images during free viewing have received only a little attention so far, limiting opportunities to enhance visual effectiveness of information presentation. The aim of this study was to elucidate how the 2D layout of items in 3D images impacts visual search and distribution of maintaining attention based on the analysis of the viewer's gaze. Participants were searching for a target which was projected one plane closer to the viewer compared to distractors on a multi-plane display. The 2D layout of items was manipulated by changing the item distance from the center of the display plane from 2° to 8°. As a result, the targets were identified correctly when the items were displayed close to the center of the display plane; however, the number of errors grew with an increase in distance. Moreover, correct responses were given more often when subjects paid more attention to targets compared to other items on the screen. However, a more balanced distribution of attention over time across all items was characteristic of the incorrectly completed trials. 
Thus, our results suggest that items should be displayed close to each other in a 2D layout to facilitate precise perception of 3D images and considering distribution of attention maintenance based on eye-tracking might be useful in the objective assessment of user experience for novel displays |
Anika Krause; Christian H. Poth Maintaining eye fixation relieves pressure of cognitive action control Journal Article In: iScience, vol. 26, no. 9, pp. 1–16, 2023. @article{Krause2023, Cognitive control enables humans to behave guided by their current goals and intentions. Cognitive control in one task generally suffers when humans try to engage in another task on top. However, we discovered an additional task that supports conflict resolution. In two experiments, participants performed a spatial cognitive control task. For different blocks of trials, they either received no instruction regarding eye movements or were asked to maintain the eyes fixated on a stimulus. The additional eye fixation task did not reduce task performance, but selectively ameliorated the adverse effects of cognitive conflicts on reaction times (Experiment 1). Likewise, in urgent situations, the additional task reduced performance impairments due to stimulus-driven processing overpowering cognitive control (Experiment 2). These findings suggest that maintaining eye fixation locks attentional resources that would otherwise induce spatial cognitive conflicts. This reveals an attentional disinhibition that boosts goal-directed action by relieving pressure from cognitive control. |
Frauke Kraus; Sarah Tune; Jonas Obleser; Björn Herrmann Neural α oscillations and pupil size differentially index cognitive demand under competing audiovisual task conditions Journal Article In: Journal of Neuroscience, vol. 43, no. 23, pp. 4352–4364, 2023. @article{Kraus2023a, Cognitive demand is thought to modulate two often used, but rarely combined, measures: pupil size and neural α (8–12 Hz) oscillatory power. However, it is unclear whether these two measures capture cognitive demand in a similar way under complex audiovisual-task conditions. Here we recorded pupil size and neural α power (using electroencephalography) while human participants of both sexes concurrently performed a visual multiple-object-tracking task and an auditory gap detection task. The difficulties of the two tasks were manipulated independently of each other. Participants' performance decreased in accuracy and speed with increasing cognitive demand. Pupil size increased with increasing difficulty for both the auditory and the visual task. In contrast, α power showed diverging neural dynamics: parietal α power decreased with increasing difficulty in the visual task, but not with increasing difficulty in the auditory task. Furthermore, independent of task difficulty, within-participant trial-by-trial fluctuations in pupil size were negatively correlated with α power. Difficulty-induced changes in pupil size and α power, however, did not correlate, which is consistent with their different cognitive-demand sensitivities. Overall, the current study demonstrates that the dynamics of the neurophysiological indices of cognitive demand and associated effort are multifaceted and potentially modality-dependent under complex audiovisual-task conditions. |
Frauke Kraus; Jonas Obleser; Björn Herrmann Pupil size sensitivity to listening demand depends on motivational state Journal Article In: eNeuro, vol. 10, no. 12, pp. 1–12, 2023. @article{Kraus2023, Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated pupil-linked arousal being sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of the pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times on a trial-by-trial within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state. |
Sofia Krasovskaya; Árni Kristjánsson; W. Joseph MacInnes Microsaccade rate activity during the preparation of pro- and antisaccades Journal Article In: Attention, Perception, & Psychophysics, vol. 85, no. 7, pp. 2257–2276, 2023. @article{Krasovskaya2023, Microsaccades belong to the category of fixational micromovements and may be crucial for image stability on the retina. Eye movement paradigms typically require fixational control, but this does not eliminate all oculomotor activity. The antisaccade task requires a planned eye movement in the direction opposite of an onset, allowing separation of planning and execution. We build on previous studies of microsaccades in the antisaccade task using a combination of fixed and mixed pro- and antisaccade blocks. We hypothesized that microsaccade rates would be reduced prior to the execution of antisaccades as compared with regular saccades (prosaccades), due to the need to preemptively suppress reflexive saccades during antisaccade generation. In two experiments, we measured microsaccades in four conditions across three trial blocks: one block each of fixed prosaccade and antisaccade trials, and a mixed block where both saccade types were randomized. In Experiment 1, with monocular eye tracking, there was an interaction between the effects of saccade and block type on microsaccade rates, suggesting lower rates on antisaccade trials, but only within mixed blocks. In Experiment 2, eye tracking was binocular, revealing suppressed microsaccade rates on antisaccade trials. A cluster permutation analysis of the microsaccade rate over the course of a trial did not reveal any particular critical time for this difference in microsaccade rates. Our findings suggest that microsaccade rates reflect the degree of suppression of the oculomotor system during the antisaccade task. |
Kenji W. Koyano; Elena M. Esch; Julie J. Hong; Elena N. Waidmann; Haitao Wu; David A. Leopold Progressive neuronal plasticity in primate visual cortex during stimulus familiarization Journal Article In: Science Advances, vol. 9, no. 12, pp. 1–12, 2023. @article{Koyano2023, The primate brain is equipped to learn and remember newly encountered visual stimuli such as faces and objects. In the macaque inferior temporal (IT) cortex, neurons mark the familiarity of a visual stimulus through response modification, often involving a decrease in spiking rate. Here, we investigate the emergence of this neural plasticity by longitudinally tracking IT neurons during several weeks of familiarization with face images. We found that most neurons in the anterior medial (AM) face patch exhibited a gradual decline in their late-phase visual responses to multiple stimuli. Individual neurons varied from days to weeks in their rates of plasticity, with time constants determined by the number of days of exposure rather than the cumulative number of presentations. We postulate that the sequential recruitment of neurons with experience-modified responses may provide an internal and graded measure of familiarity strength, which is a key mnemonic component of visual recognition. |