{"id":167,"date":"2017-07-17T03:54:21","date_gmt":"2017-07-17T03:54:21","guid":{"rendered":"https:\/\/www.sr-research.com\/?page_id=167"},"modified":"2026-02-23T13:47:07","modified_gmt":"2026-02-23T18:47:07","slug":"cognitive-publications","status":"publish","type":"page","link":"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/","title":{"rendered":"EyeLink Eye Trackers in Cognitive Publications"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" id=\"h-cognitive-eye-tracking-publications-nbsp\">Cognitive Eye-Tracking Publications&nbsp;<\/h2>\n\n\n\n<p>All EyeLink cognitive and perception eye-tracking research publications through 2025 (with some early 2026s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please <a href=\"mailto:socialmedia@sr-research.com\"><strong>email us<\/strong><\/a>!<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><div class=\"tp_search_input\"><input name=\"tsr\" id=\"tp_search_input_field\" type=\"search\" placeholder=\"Enter search word\" value=\"\" tabindex=\"1\"\/><\/div><div class=\"teachpress_filter\"><select class=\"block\" title=\"All years\" name=\"yr\" id=\"yr\" tabindex=\"2\">\r\n                   <option value=\"\">All years<\/option>\r\n                   <option value=\"2026\" >2026<\/option><option value=\"2025\" >2025<\/option><option value=\"2024\" >2024<\/option><option value=\"2023\" >2023<\/option><option value=\"2022\" >2022<\/option><option value=\"2021\" >2021<\/option><option value=\"2020\" >2020<\/option><option value=\"2019\" >2019<\/option><option value=\"2018\" >2018<\/option><option 
value=\"2017\" >2017<\/option><option value=\"2016\" >2016<\/option><option value=\"2015\" >2015<\/option><option value=\"2014\" >2014<\/option><option value=\"2013\" >2013<\/option><option value=\"2012\" >2012<\/option><option value=\"2011\" >2011<\/option><option value=\"2010\" >2010<\/option><option value=\"2009\" >2009<\/option><option value=\"2008\" >2008<\/option><option value=\"2007\" >2007<\/option><option value=\"2006\" >2006<\/option><option value=\"2005\" >2005<\/option><option value=\"2004\" >2004<\/option><option value=\"2003\" >2003<\/option><option value=\"2002\" >2002<\/option><option value=\"2001\" >2001<\/option><option value=\"2000\" >2000<\/option><option value=\"1999\" >1999<\/option><option value=\"1998\" >1998<\/option><option value=\"1997\" >1997<\/option>\r\n                <\/select><div class=\"teachpress_search_button\"><input name=\"tps_button\" class=\"tp_search_button\" type=\"submit\" tabindex=\"10\" value=\"Search\"\/><\/div><\/div><\/form><div class=\"tablenav\"><div class=\"tablenav-pages\"><span class=\"displaying-num\">8709 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 88 <a href=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/?limit=88&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> <\/div><\/div><table class=\"teachpress_publication_list\"><tr>\r\n                    <td>\r\n                        <h3 class=\"tp_h3\" id=\"tp_h3_2026\">2026<\/h3>\r\n                    <\/td>\r\n                <\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Tal Ravid-Roth; Romi Livne; Ariel Berlinger; Wilfried 
Kunde; Baruch Eitam; Sagi Jaffe-Dax<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9895','tp_abstract')\" style=\"cursor:pointer;\">The effect of gaze contingencies on infants' looking preference<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 270, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201318, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9895\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9895','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9895\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9895','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9895\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9895','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9895\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Ravid-Roth2026,<br \/>\r\ntitle = {The effect of gaze contingencies on infants' looking preference},<br \/>\r\nauthor = {Tal Ravid-Roth and Romi Livne and Ariel Berlinger and Wilfried Kunde and Baruch Eitam and Sagi Jaffe-Dax},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106417},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-05-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {270},<br \/>\r\npages = {1\u201318},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Infants exhibit robust predictive capacities from birth; Most research has focused on how they 
process externally generated events, leaving unexplored how predictions rooted in their own actions influence attention. We asked whether the source of predictability - self-generated vs. externally structured - affects infants' looking preferences beyond overall predictability. Across two gaze-contingent eye-tracking experiments, we investigated whether infants prefer to look at stimuli whose movements are triggered by their own gaze, or at stimuli that move independently. In Experiment 1 (n = 21},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9895','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9895\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Infants exhibit robust predictive capacities from birth; most research has focused on how they process externally generated events, leaving unexplored how predictions rooted in their own actions influence attention. We asked whether the source of predictability - self-generated vs. externally structured - affects infants' looking preferences beyond overall predictability. Across two gaze-contingent eye-tracking experiments, we investigated whether infants prefer to look at stimuli whose movements are triggered by their own gaze, or at stimuli that move independently. 
In Experiment 1 (n = 21<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9895','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9895\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106417\" title=\"Follow DOI:10.1016\/j.cognition.2025.106417\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106417<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9895','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yamei Zhang; Xiaojun Sun; Jing Ma<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13809','tp_abstract')\" style=\"cursor:pointer;\">Cognitive aspects of video-based learning with instructor presence depend on pedagogical approaches: A perspective from motivating styles<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Learning and Instruction, <\/span><span class=\"tp_pub_additional_volume\">vol. 102, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201311, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13809\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13809','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13809\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13809','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13809\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13809','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13809\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2026e,<br \/>\r\ntitle = {Cognitive aspects of video-based learning with instructor presence depend on pedagogical approaches: A perspective from motivating styles},<br \/>\r\nauthor = {Yamei Zhang and Xiaojun Sun and Jing Ma},<br \/>\r\ndoi = {10.1016\/j.learninstruc.2025.102299},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-04-01},<br \/>\r\njournal = {Learning and Instruction},<br \/>\r\nvolume = {102},<br \/>\r\npages = {1\u201311},<br \/>\r\npublisher = {Elsevier Ltd},<br \/>\r\nabstract = {Background: Instructor presence is a critical feature that should be considered when designing video lectures. However, its influence on cognitive aspects of learning is mixed. Such inconsistencies imply the likelihood of moderators shaping the influence. Motivating styles (i.e., autonomy-supportive, controlling and neutral teaching), the most concerned pedagogical approaches, may be such a moderator. 
Aim: This study examines how the influence of instructor presence on the cognitive aspects of learning, including learning outcomes, attention (i.e., visual attention allocation and concentration), and extraneous cognitive load, varies with motivating styles. Sample and methods: A three (motivating styles: autonomy-supportive vs. controlling vs. neutral teaching) \u00d7 two (instructor presence: present vs. absent) between-subjects eye-tracking experiment was conducted among 181 university students. Results: While instructor presence reduced visual attention to the knowledge area regardless of the instructor's motivating style, its effects on learning outcomes (albeit only in terms of retention), concentration, and extraneous load were conditional on it. Specifically, compared with the instructor-absent condition, under autonomy-supportive teaching, instructor presence decreased retention and concentration, but did not affect extraneous load; under controlling teaching, instructor presence did not impact retention, but damaged concentration and boosted extraneous load; under neutral teaching, instructor presence promoted retention without affecting concentration or extraneous load. Conclusions: The findings imply that the facilitating effect of instructor presence as a social cue and its detrimental effect as a seductive detail can dominate one another or cancel each other out under specific motivating styles. Hence, pedagogical approaches can shape the effects of instructor presence.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13809','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13809\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Background: Instructor presence is a critical feature that should be considered when designing video lectures. 
However, its influence on cognitive aspects of learning is mixed. Such inconsistencies imply the likelihood of moderators shaping the influence. Motivating styles (i.e., autonomy-supportive, controlling and neutral teaching), the pedagogical approaches of greatest concern, may be such a moderator. Aim: This study examines how the influence of instructor presence on the cognitive aspects of learning, including learning outcomes, attention (i.e., visual attention allocation and concentration), and extraneous cognitive load, varies with motivating styles. Sample and methods: A three (motivating styles: autonomy-supportive vs. controlling vs. neutral teaching) \u00d7 two (instructor presence: present vs. absent) between-subjects eye-tracking experiment was conducted among 181 university students. Results: While instructor presence reduced visual attention to the knowledge area regardless of the instructor's motivating style, its effects on learning outcomes (albeit only in terms of retention), concentration, and extraneous load were conditional on motivating style. Specifically, compared with the instructor-absent condition, under autonomy-supportive teaching, instructor presence decreased retention and concentration, but did not affect extraneous load; under controlling teaching, instructor presence did not impact retention, but damaged concentration and boosted extraneous load; under neutral teaching, instructor presence promoted retention without affecting concentration or extraneous load. Conclusions: The findings imply that the facilitating effect of instructor presence as a social cue and its detrimental effect as a seductive detail can dominate one another or cancel each other out under specific motivating styles. 
Hence, pedagogical approaches can shape the effects of instructor presence.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13809','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13809\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.learninstruc.2025.102299\" title=\"Follow DOI:10.1016\/j.learninstruc.2025.102299\" target=\"_blank\">doi:10.1016\/j.learninstruc.2025.102299<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13809','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Kaitlyn N. Drennan; Nicholas Gaspelin<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('2968','tp_abstract')\" style=\"cursor:pointer;\">What can a half-million saccades tell us about distractor suppression?<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 269, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201314, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_2968\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2968','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_2968\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2968','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_2968\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2968','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_2968\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Drennan2026,<br \/>\r\ntitle = {What can a half-million saccades tell us about distractor suppression?},<br \/>\r\nauthor = {Kaitlyn N. Drennan and Nicholas Gaspelin},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106397},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-04-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {269},<br \/>\r\npages = {1\u201314},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Salient distractions in the environment compete for attention and have the potential to interfere with our goals. An abundance of research has therefore examined how we learn to prevent distraction by salient stimuli. There is growing consensus that salient stimuli can be suppressed to mitigate distraction. However, many questions about distractor suppression have been difficult to resolve in typical studies that use small sample sizes. The current study is a pooled analysis of several previous eye-tracking studies (N = 354) which resulted in a large data set of more than a half-million eye movements. 
This large data set was used to uncover new findings that improve our understanding of the attentional processes involved in distractor suppression. We also evaluated several new findings related to how attentional suppression is learned and is influenced by selection history. Altogether, these findings highlight the need for a hybrid model of attention that includes both bottom-up and top-down components. Moreover, this large publicly available dataset can be used by future research to investigate other questions related to attentional capture and distractor suppression.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2968','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_2968\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Salient distractions in the environment compete for attention and have the potential to interfere with our goals. An abundance of research has therefore examined how we learn to prevent distraction by salient stimuli. There is growing consensus that salient stimuli can be suppressed to mitigate distraction. However, many questions about distractor suppression have been difficult to resolve in typical studies that use small sample sizes. The current study is a pooled analysis of several previous eye-tracking studies (N = 354) which resulted in a large data set of more than a half-million eye movements. This large data set was used to uncover new findings that improve our understanding of the attentional processes involved in distractor suppression. We also evaluated several new findings related to how attentional suppression is learned and is influenced by selection history. Altogether, these findings highlight the need for a hybrid model of attention that includes both bottom-up and top-down components. 
Moreover, this large publicly available dataset can be used by future research to investigate other questions related to attentional capture and distractor suppression.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2968','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_2968\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106397\" title=\"Follow DOI:10.1016\/j.cognition.2025.106397\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106397<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2968','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ting Zhang; Shujia Zhang; Yi Jiang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13790','tp_abstract')\" style=\"cursor:pointer;\">Automatic pupillary responses to pain perception in adults and children: The influence of race and autistic traits<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 268, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20139, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13790\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13790','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13790\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13790','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13790\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13790','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13790\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2026d,<br \/>\r\ntitle = {Automatic pupillary responses to pain perception in adults and children: The influence of race and autistic traits},<br \/>\r\nauthor = {Ting Zhang and Shujia Zhang and Yi Jiang},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106384},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {268},<br \/>\r\npages = {1\u20139},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {The ability to understand and share others' emotional states (e.g., feeling of pain) plays a fundamental role in survival and prosocial behavior. The current study utilized pupillometry to assess automatic psychophysiological responses to others' painful facial expressions in both adults and children (N = 72). Results revealed that pupil size significantly increased when perceiving painful versus neutral expressions, independent of low-level visual features. Notably, both adults and children exhibited a racial in-group bias, with pupil dilation effects observed only for same-race painful faces. 
Furthermore, individuals' Autism Spectrum Quotient scores were negatively correlated with pupil dilation effects toward painful expressions of same-race faces. These findings suggest that pupillary responses might reflect automatic empathic arousal to others' pain and are modulated by racial group membership and autistic traits, providing a potential physiological indicator, at least at the group level, for probing affective resonance in children or individuals with socio-cognitive disorders (e.g., autism spectrum disorder).},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13790','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13790\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The ability to understand and share others' emotional states (e.g., feeling of pain) plays a fundamental role in survival and prosocial behavior. The current study utilized pupillometry to assess automatic psychophysiological responses to others' painful facial expressions in both adults and children (N = 72). Results revealed that pupil size significantly increased when perceiving painful versus neutral expressions, independent of low-level visual features. Notably, both adults and children exhibited a racial in-group bias, with pupil dilation effects observed only for same-race painful faces. Furthermore, individuals' Autism Spectrum Quotient scores were negatively correlated with pupil dilation effects toward painful expressions of same-race faces. 
These findings suggest that pupillary responses might reflect automatic empathic arousal to others' pain and are modulated by racial group membership and autistic traits, providing a potential physiological indicator, at least at the group level, for probing affective resonance in children or individuals with socio-cognitive disorders (e.g., autism spectrum disorder).<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13790','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13790\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106384\" title=\"Follow DOI:10.1016\/j.cognition.2025.106384\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106384<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13790','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Xiaozhi Yang; Elizabeth E. Riggs; Jason C. Coronel; Ian Krajbich<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13452','tp_abstract')\" style=\"cursor:pointer;\">Issue importance amplifies the effect of gaze on voting decisions<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 268, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13452\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13452','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13452\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13452','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13452\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13452','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13452\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Yang2026,<br \/>\r\ntitle = {Issue importance amplifies the effect of gaze on voting decisions},<br \/>\r\nauthor = {Xiaozhi Yang and Elizabeth E. Riggs and Jason C. Coronel and Ian Krajbich},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106376},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {268},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {There are many factors that can influence a voter's decision in the ballot booth but not all of them are policy related. One non-policy factor that may influence voters is the tendency to choose options that attract attention. Here, we investigate this possibility in two proof-of-concept laboratory studies with people choosing between proposed laws. We find that people are slower to vote when their party is split over an issue, and that they tend to vote for laws that they look at more. Moreover, this gaze effect is stronger for more important issues. 
We also find that we can increase the probability that someone will vote for one of two laws by getting them to look at that option first. Our work harnesses the power of sequential sampling models to explain the relationship between gaze and vote choice. We find support for a goal-based model where overt attention amplifies information supporting a particular law. This model explains why gaze has a stronger effect on choice for more important issues. Our findings indicate that some voting decisions are not predetermined and instead rely on an on-the-spot evaluation. As a result, these decisions can be swayed by attentional manipulations. Thus, visual attention may serve as a unifying framework for understanding different biases that occur in the voting booth, such as ballot-order and candidate-name-familiarity effects.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13452','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13452\" style=\"display:none;\"><div class=\"tp_abstract_entry\">There are many factors that can influence a voter's decision in the ballot booth but not all of them are policy related. One non-policy factor that may influence voters is the tendency to choose options that attract attention. Here, we investigate this possibility in two proof-of-concept laboratory studies with people choosing between proposed laws. We find that people are slower to vote when their party is split over an issue, and that they tend to vote for laws that they look at more. Moreover, this gaze effect is stronger for more important issues. We also find that we can increase the probability that someone will vote for one of two laws by getting them to look at that option first. 
Our work harnesses the power of sequential sampling models to explain the relationship between gaze and vote choice. We find support for a goal-based model where overt attention amplifies information supporting a particular law. This model explains why gaze has a stronger effect on choice for more important issues. Our findings indicate that some voting decisions are not predetermined and instead rely on an on-the-spot evaluation. As a result, these decisions can be swayed by attentional manipulations. Thus, visual attention may serve as a unifying framework for understanding different biases that occur in the voting booth, such as ballot-order and candidate-name-familiarity effects.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13452','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13452\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106376\" title=\"Follow DOI:10.1016\/j.cognition.2025.106376\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106376<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13452','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yunfei Shang; Ke Liu; Qing Feng<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10854','tp_abstract')\" style=\"cursor:pointer;\">The influences of security and context on attentional bias toward emotional faces: Evidence from eye movements<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Acta Psychologica, <\/span><span 
class=\"tp_pub_additional_volume\">vol. 263, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u20138, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10854\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10854','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10854\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10854','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10854\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10854','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10854\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shang2026,<br \/>\r\ntitle = {The influences of security and context on attentional bias toward emotional faces: Evidence from eye movements},<br \/>\r\nauthor = {Yunfei Shang and Ke Liu and Qing Feng},<br \/>\r\ndoi = {10.1016\/j.actpsy.2025.106141},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Acta Psychologica},<br \/>\r\nvolume = {263},<br \/>\r\npages = {1\u20138},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {This study employed a dot-probe paradigm to investigate attentional biases toward emotional faces in individuals with high versus low levels of security across general and threat contexts, using eye-tracking technology. Participants were screened into high- and low-security groups based on validated security scales. Threat contexts were established using images from the International Affective Picture System (IAPS). Results revealed that: (1) Both high- and low-security individuals exhibited attentional biases toward emotional faces compared to neutral faces. 
(2) Security levels modulated attention to emotional faces: high-security individuals displayed greater bias toward happy faces, while low-security individuals showed enhanced bias toward angry faces, consistent with the schema-congruence hypothesis. (3) Reaction times accelerated under threat conditions for all participants, and threat contexts amplified attentional bias toward angry faces in high-security individuals. These findings highlight the interplay between intrinsic security and external contexts in shaping attentional processing of emotional stimuli.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10854','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10854\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This study employed a dot-probe paradigm to investigate attentional biases toward emotional faces in individuals with high versus low levels of security across general and threat contexts, using eye-tracking technology. Participants were screened into high- and low-security groups based on validated security scales. Threat contexts were established using images from the International Affective Picture System (IAPS). Results revealed that: (1) Both high- and low-security individuals exhibited attentional biases toward emotional faces compared to neutral faces. (2) Security levels modulated attention to emotional faces: high-security individuals displayed greater bias toward happy faces, while low-security individuals showed enhanced bias toward angry faces, consistent with the schema-congruence hypothesis. (3) Reaction times accelerated under threat conditions for all participants, and threat contexts amplified attentional bias toward angry faces in high-security individuals. 
These findings highlight the interplay between intrinsic security and external contexts in shaping attentional processing of emotional stimuli.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10854','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10854\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.actpsy.2025.106141\" title=\"Follow DOI:10.1016\/j.actpsy.2025.106141\" target=\"_blank\">doi:10.1016\/j.actpsy.2025.106141<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10854','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ilanit Hochmitz; Yaffa Yeshurun; Amit Yashar<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('4924','tp_abstract')\" style=\"cursor:pointer;\">Temporal dynamics of integration and individuation: Insights from temporal averaging and crowding<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 268, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4924\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4924','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4924\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4924','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4924\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4924','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4924\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Hochmitz2026,<br \/>\r\ntitle = {Temporal dynamics of integration and individuation: Insights from temporal averaging and crowding},<br \/>\r\nauthor = {Ilanit Hochmitz and Yaffa Yeshurun and Amit Yashar},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106374},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {268},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Individuating a single item presented within a continuous sequence of items requires segregating its signal from that of the other items. In contrast, representing a global aspect of the sequence, such as its average orientation, involves integration of information across time. Individuation and integration allow us to focus on individual events while maintaining an overall perception of our environment. To examine the relations between temporal averaging and individuation, we measured orientation averaging over short and long timescales using the same stimuli and orientation-estimation procedure previously used to measure individuation. 
Participants reported the average orientation of a sequence of three oriented items separated by either short (SOAs&lt;150 ms) or long intervals (SOAs&gt;150 ms). Analysis of the error distribution and mixture-modeling revealed distinct patterns of results for the different tasks and timescales, but also some similarities, particularly for the short timescale. In this timescale, the relative contribution of each individual item to the final response was similar across tasks, indicating the involvement of low-level factors operating regardless of the task. With the long timescale, the two tasks showed dissociable patterns across all performance aspects, except guessing rate, indicating that long-scale individuation and averaging engage mainly higher-level, task-related processes. Importantly, regardless of timescale, estimation errors in these tasks were best described by different models: in integration they primarily reflected unequal weighting of the averaged items, whereas in individuation they reflected imprecise target encoding with occasional misreports of distractors. Together, the findings reveal dissociable dynamics for integration and individuation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4924','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4924\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Individuating a single item presented within a continuous sequence of items requires segregating its signal from that of the other items. In contrast, representing a global aspect of the sequence, such as its average orientation, involves integration of information across time. Individuation and integration allow us to focus on individual events while maintaining an overall perception of our environment. 
To examine the relations between temporal averaging and individuation, we measured orientation averaging over short and long timescales using the same stimuli and orientation-estimation procedure previously used to measure individuation. Participants reported the average orientation of a sequence of three oriented items separated by either short (SOAs&lt;150 ms) or long intervals (SOAs&gt;150 ms). Analysis of the error distribution and mixture-modeling revealed distinct patterns of results for the different tasks and timescales, but also some similarities, particularly for the short timescale. In this timescale, the relative contribution of each individual item to the final response was similar across tasks, indicating the involvement of low-level factors operating regardless of the task. With the long timescale, the two tasks showed dissociable patterns across all performance aspects, except guessing rate, indicating that long-scale individuation and averaging engage mainly higher-level, task-related processes. Importantly, regardless of timescale, estimation errors in these tasks were best described by different models: in integration they primarily reflected unequal weighting of the averaged items, whereas in individuation they reflected imprecise target encoding with occasional misreports of distractors. 
Together, the findings reveal dissociable dynamics for integration and individuation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4924','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4924\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106374\" title=\"Follow DOI:10.1016\/j.cognition.2025.106374\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106374<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4924','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">\u00c2ngela Gomes Tomaz; Adrien Chopin; Noelia Gabriela Alcalde; Dennis M. Levi; Preeti Verghese<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('4094','tp_abstract')\" style=\"cursor:pointer;\">The best stereoacuity is rarely at the fovea<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Vision Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 240, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201313, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4094\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4094','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4094\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4094','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4094\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4094','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4094\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{GomesTomaz2026,<br \/>\r\ntitle = {The best stereoacuity is rarely at the fovea},<br \/>\r\nauthor = {\u00c2ngela Gomes Tomaz and Adrien Chopin and Noelia Gabriela Alcalde and Dennis M. Levi and Preeti Verghese},<br \/>\r\ndoi = {10.1016\/j.visres.2025.108748},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Vision Research},<br \/>\r\nvolume = {240},<br \/>\r\npages = {1\u201313},<br \/>\r\nabstract = {Stereoacuity, the ability to perceive depth from binocular disparity, is traditionally considered to be best at the fovea in typical human vision, and to decline with eccentricity. Previous studies have shown that when stereopsis is present in amblyopia, it is often coarse and comparable to stereoacuity associated with the peripheral retina in neurotypical controls, suggesting that it might be mediated by a non-foveal locus. Here we measured stereoacuity as a function of eccentricity in participants with amblyopia as well as controls with no history of abnormal visual development. 
We measured stereoacuity using random dot stereograms and targets that scaled with eccentricity, testing the fovea, and eccentricities of 2.5\u00b0, 5\u00b0, and 10\u00b0 along the horizontal and vertical meridians. For 87.5% (7\/8) of amblyopic participants, the locus of best stereoacuity was non-foveal. Surprisingly, 75% of control participants (15\/20) also exhibited their best stereoacuity at non-foveal locations, with only 5 controls showing foveal superiority. Using stimulus parameters modified to improve foveal performance, we repeated measurements on a subset of controls whose best stereoacuity was non-foveal, but the best locus only shifted to the fovea in one participant. Stereoacuity measured at the experimentally determined \u201cbest locus\u201d correlated well with standard clinical stereoacuity tests. These findings challenge the conventional view of universal foveal dominance for stereopsis, suggesting that the fovea is not invariably the site of best stereoscopic sensitivity, even in many normally sighted individuals. This has implications for understanding binocular vision in amblyopic and normal vision, and for interpreting clinical stereo tests.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4094','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4094\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Stereoacuity, the ability to perceive depth from binocular disparity, is traditionally considered to be best at the fovea in typical human vision, and to decline with eccentricity. Previous studies have shown that when stereopsis is present in amblyopia, it is often coarse and comparable to stereoacuity associated with the peripheral retina in neurotypical controls, suggesting that it might be mediated by a non-foveal locus. 
Here we measured stereoacuity as a function of eccentricity in participants with amblyopia as well as controls with no history of abnormal visual development. We measured stereoacuity using random dot stereograms and targets that scaled with eccentricity, testing the fovea, and eccentricities of 2.5\u00b0, 5\u00b0, and 10\u00b0 along the horizontal and vertical meridians. For 87.5% (7\/8) of amblyopic participants, the locus of best stereoacuity was non-foveal. Surprisingly, 75% of control participants (15\/20) also exhibited their best stereoacuity at non-foveal locations, with only 5 controls showing foveal superiority. Using stimulus parameters modified to improve foveal performance, we repeated measurements on a subset of controls whose best stereoacuity was non-foveal, but the best locus only shifted to the fovea in one participant. Stereoacuity measured at the experimentally determined \u201cbest locus\u201d correlated well with standard clinical stereoacuity tests. These findings challenge the conventional view of universal foveal dominance for stereopsis, suggesting that the fovea is not invariably the site of best stereoscopic sensitivity, even in many normally sighted individuals. 
This has implications for understanding binocular vision in amblyopic and normal vision, and for interpreting clinical stereo tests.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4094','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4094\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.visres.2025.108748\" title=\"Follow DOI:10.1016\/j.visres.2025.108748\" target=\"_blank\">doi:10.1016\/j.visres.2025.108748<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4094','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ryan M. Barker; Michael J. Armson; Nicholas B. Diamond; Zhong Xu Liu; Yushu Wang; Jennifer D. Ryan; Brian Levine<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('648','tp_abstract')\" style=\"cursor:pointer;\">Remembrance with gazes passed: Eye movements precede continuous recall of episodic details of real-life events<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 268, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20136, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_648\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('648','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_648\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('648','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_648\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('648','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_648\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Barker2026,<br \/>\r\ntitle = {Remembrance with gazes passed: Eye movements precede continuous recall of episodic details of real-life events},<br \/>\r\nauthor = {Ryan M. Barker and Michael J. Armson and Nicholas B. Diamond and Zhong Xu Liu and Yushu Wang and Jennifer D. Ryan and Brian Levine},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106380},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {268},<br \/>\r\npages = {1\u20136},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Autobiographical memory entails the reconstruction of the visual features of past events. While eye movements are associated with vivid autobiographical recollection, this research has yet to capitalize on the high temporal resolution of eye-tracking data. We aligned eye movement data with participants' extemporaneous free recall of a verified real-life event, allowing us to assess the temporal correspondence of saccades to production of episodic and non-episodic narrative content at the millisecond level. 
Episodic autobiographical details were preceded by an increase in saccade frequency and followed by a reduction in saccades prior to the next detail. There was no such effect observed for non-episodic details. Oculomotor responses in the temporal window preceding freely-recalled details may facilitate recollection by reinstating spatiotemporal context, or they may reflect post-retrieval processes\u2014or a combination of both\u2014in cyclical sensory-motor-mnemonic interactions that promote vivid recall.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('648','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_648\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Autobiographical memory entails the reconstruction of the visual features of past events. While eye movements are associated with vivid autobiographical recollection, this research has yet to capitalize on the high temporal resolution of eye-tracking data. We aligned eye movement data with participants' extemporaneous free recall of a verified real-life event, allowing us to assess the temporal correspondence of saccades to production of episodic and non-episodic narrative content at the millisecond level. Episodic autobiographical details were preceded by an increase in saccade frequency and followed by a reduction in saccades prior to the next detail. There was no such effect observed for non-episodic details. 
Oculomotor responses in the temporal window preceding freely-recalled details may facilitate recollection by reinstating spatiotemporal context, or they may reflect post-retrieval processes\u2014or a combination of both\u2014in cyclical sensory-motor-mnemonic interactions that promote vivid recall.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('648','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_648\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106380\" title=\"Follow DOI:10.1016\/j.cognition.2025.106380\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106380<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('648','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Fangfang Zhu; Yifen Liu; Mengyuan Wang; Jiumin Yang; Zhongling Pi; Zhiqiang Ma<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13960','tp_abstract')\" style=\"cursor:pointer;\">When do teachers' pleasant expressions in video lectures facilitate learning? The role of emotional learning materials and auditory emotions<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Computer Assisted Learning, <\/span><span class=\"tp_pub_additional_volume\">vol. 42, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201316, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13960\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13960','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13960\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13960','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13960\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13960','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13960\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhu2026,<br \/>\r\ntitle = {When do teachers' pleasant expressions in video lectures facilitate learning? The role of emotional learning materials and auditory emotions},<br \/>\r\nauthor = {Fangfang Zhu and Yifen Liu and Mengyuan Wang and Jiumin Yang and Zhongling Pi and Zhiqiang Ma},<br \/>\r\ndoi = {10.1111\/jcal.70155},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-02-01},<br \/>\r\njournal = {Journal of Computer Assisted Learning},<br \/>\r\nvolume = {42},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {John Wiley and Sons Inc},<br \/>\r\nabstract = {Background: Emotional cues in video lectures have demonstrated complex effects on learning, particularly regarding teachers' facial expressions. However, these effects remain inconclusive, necessitating further exploration of potential factors to enhance learning. 
Objectives: This study examined how three forms of emotional design\u2014learning materials, teachers' facial expressions and teachers' auditory emotions\u2014individually and jointly influence learners' emotional responses, cognitive processing and learning outcomes in video-based instruction. Methods: Across two experiments, we investigated the independent and interactive effects of teachers' facial expressions, the emotional design of learning materials and teachers' auditory emotion on students' emotions, motivation, attention, cognitive load and learning outcomes. Experiment 1 examined the interaction between teachers' facial expressions and emotionally designed learning materials, while Experiment 2 built on these findings to test whether congruent positive facial and auditory cues further enhance students' emotional, motivational and cognitive engagement. Results: In Experiment 1, when learning materials were neutrally designed, teachers' pleasant facial expressions reduced extraneous cognitive load and improved learning outcomes. Experiment 2 showed that pairing pleasant facial expressions with pleasant auditory emotion elicited more positive emotions, higher motivation, increased germane load and better learning outcomes. Eye-tracking analyses indicated that this emotional congruence decreased attentional distraction, highlighting the synergistic benefits of combining visual and auditory emotional cues. Conclusions: The study identifies the synergistic effects of various emotional design elements in video lectures on students' learning and contributes to theories of emotional design and cognitive processing in multimedia learning contexts. 
It also offers practical insights for educators on optimising emotional cues in video-based learning environments.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13960','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13960\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Background: Emotional cues in video lectures have demonstrated complex effects on learning, particularly regarding teachers' facial expressions. However, these effects remain inconclusive, necessitating further exploration of potential factors to enhance learning. Objectives: This study examined how three forms of emotional design\u2014learning materials, teachers' facial expressions and teachers' auditory emotions\u2014individually and jointly influence learners' emotional responses, cognitive processing and learning outcomes in video-based instruction. Methods: Across two experiments, we investigated the independent and interactive effects of teachers' facial expressions, the emotional design of learning materials and teachers' auditory emotion on students' emotions, motivation, attention, cognitive load and learning outcomes. Experiment 1 examined the interaction between teachers' facial expressions and emotionally designed learning materials, while Experiment 2 built on these findings to test whether congruent positive facial and auditory cues further enhance students' emotional, motivational and cognitive engagement. Results: In Experiment 1, when learning materials were neutrally designed, teachers' pleasant facial expressions reduced extraneous cognitive load and improved learning outcomes. Experiment 2 showed that pairing pleasant facial expressions with pleasant auditory emotion elicited more positive emotions, higher motivation, increased germane load and better learning outcomes. 
Eye-tracking analyses indicated that this emotional congruence decreased attentional distraction, highlighting the synergistic benefits of combining visual and auditory emotional cues. Conclusions: The study identifies the synergistic effects of various emotional design elements in video lectures on students' learning and contributes to theories of emotional design and cognitive processing in multimedia learning contexts. It also offers practical insights for educators on optimising emotional cues in video-based learning environments.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13960','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13960\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1111\/jcal.70155\" title=\"Follow DOI:10.1111\/jcal.70155\" target=\"_blank\">doi:10.1111\/jcal.70155<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13960','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Tianyu Zhang; Yongchun Cai<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13786','tp_abstract')\" style=\"cursor:pointer;\">Shared mechanisms of presaccadic and exogenous attention in modulating visual perception of contrast<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 267, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201313, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13786\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13786','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13786\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13786','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13786\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13786','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13786\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2026c,<br \/>\r\ntitle = {Shared mechanisms of presaccadic and exogenous attention in modulating visual perception of contrast},<br \/>\r\nauthor = {Tianyu Zhang and Yongchun Cai},<br \/>\r\ndoi = {10.1016\/j.cognition.2025.106343},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-02-01},<br \/>\r\njournal = {Cognition},<br \/>\r\nvolume = {267},<br \/>\r\npages = {1\u201313},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Different types of attention alter subjective visual perception in fundamentally distinct ways. Previous studies have focused on covert attention without concurrent eye movements, revealing that covert exogenous (involuntary) attention enhances contrast appearance of low-contrast stimuli while diminishing that of high-contrast stimuli, whereas covert endogenous (voluntary) attention uniformly enhances contrast appearance. However, the attentional effect preceding saccadic eye movements, a critical component of natural vision, remains understudied. 
Here, we found that when participants voluntarily initiated saccades, presaccadic attention enhanced the appearance of low-contrast stimuli while attenuating the appearance of high-contrast stimuli (Experiment 1},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13786','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13786\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Different types of attention alter subjective visual perception in fundamentally distinct ways. Previous studies have focused on covert attention without concurrent eye movements, revealing that covert exogenous (involuntary) attention enhances contrast appearance of low-contrast stimuli while diminishing that of high-contrast stimuli, whereas covert endogenous (voluntary) attention uniformly enhances contrast appearance. However, the attentional effect preceding saccadic eye movements, a critical component of natural vision, remains understudied. 
Here, we found that when participants voluntarily initiated saccades, presaccadic attention enhanced the appearance of low-contrast stimuli while attenuating the appearance of high-contrast stimuli (Experiment 1<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13786','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13786\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cognition.2025.106343\" title=\"Follow DOI:10.1016\/j.cognition.2025.106343\" target=\"_blank\">doi:10.1016\/j.cognition.2025.106343<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13786','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">G\u00fcven Kandemir; Christian N. L. Olivers<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('5801','tp_abstract')\" style=\"cursor:pointer;\">Serial dependence is stronger for peripheral than for central vision<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Attention, Perception, &amp; Psychophysics, <\/span><span class=\"tp_pub_additional_volume\">vol. 88, <\/span><span class=\"tp_pub_additional_number\">no. 2, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201318, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_5801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5801','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_5801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5801','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_5801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5801','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_5801\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Kandemir2026,<br \/>\r\ntitle = {Serial dependence is stronger for peripheral than for central vision},<br \/>\r\nauthor = {G\u00fcven Kandemir and Christian N. L. Olivers},<br \/>\r\ndoi = {10.3758\/s13414-025-03208-1},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-02-01},<br \/>\r\njournal = {Attention, Perception, & Psychophysics},<br \/>\r\nvolume = {88},<br \/>\r\nnumber = {2},<br \/>\r\npages = {1\u201318},<br \/>\r\nabstract = {Serial dependence in vision refers to the fact that perceptual judgements are biased by earlier experiences, and has been thought to reduce sensory uncertainty and sustain perceptual continuity over time and space. While vision changes with eccentricity, little is known about if and how serial dependence differs in the periphery relative to fovea. Here we aimed to reduce this gap by comparing serial dependence for centrally and peripherally presented stimuli. Experiment 1 presents a reanalysis of an existing dataset from an earlier working memory task requiring the memorization of differently oriented gratings, presented either centrally or at 15\u00b0 eccentricity. 
Experiment 2 also varied pre-knowledge of the item's location through spatial cueing. Experiment 3 replicated Experiment 1 but with lower contrast levels and equating the probabilities of central and peripheral stimuli. Across all experiments we observed an attractive bias towards the orientation of the preceding trial at all locations. Crucially, this bias was always larger in the periphery relative to the central position, and it was mainly the current item's location that drove this effect, rather than the previous item's location. Pre-knowledge of item location failed to influence the eccentricity effect on serial dependence, nor did reduced contrast or differential probabilities change the conclusions. Our results thus demonstrate that serial dependence is not equal across eccentricity. The data and the scripts are available at: https:\/\/osf.io\/v56hn\/?view_only=6d4d5bba493b4bc788c3eed8decd8370},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5801','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_5801\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Serial dependence in vision refers to the fact that perceptual judgements are biased by earlier experiences, and has been thought to reduce sensory uncertainty and sustain perceptual continuity over time and space. While vision changes with eccentricity, little is known about if and how serial dependence differs in the periphery relative to fovea. Here we aimed to reduce this gap by comparing serial dependence for centrally and peripherally presented stimuli. Experiment 1 presents a reanalysis of an existing dataset from an earlier working memory task requiring the memorization of differently oriented gratings, presented either centrally or at 15\u00b0 eccentricity. 
Experiment 2 also varied pre-knowledge of the item's location through spatial cueing. Experiment 3 replicated Experiment 1 but with lower contrast levels and equating the probabilities of central and peripheral stimuli. Across all experiments we observed an attractive bias towards the orientation of the preceding trial at all locations. Crucially, this bias was always larger in the periphery relative to the central position, and it was mainly the current item's location that drove this effect, rather than the previous item's location. Pre-knowledge of item location failed to influence the eccentricity effect on serial dependence, nor did reduced contrast or differential probabilities change the conclusions. Our results thus demonstrate that serial dependence is not equal across eccentricity. The data and the scripts are available at: https:\/\/osf.io\/v56hn\/?view_only=6d4d5bba493b4bc788c3eed8decd8370<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5801','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_5801\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13414-025-03208-1\" title=\"Follow DOI:10.3758\/s13414-025-03208-1\" target=\"_blank\">doi:10.3758\/s13414-025-03208-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5801','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Alexia Galati; Rick Dale; Camila Alviar; Moreno I. 
Coco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('3795','tp_abstract')\" style=\"cursor:pointer;\">Task goals constrain the alignment in eye-movements and speech during interpersonal coordination<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Memory and Language, <\/span><span class=\"tp_pub_additional_volume\">vol. 146, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201318, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_3795\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3795','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_3795\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3795','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_3795\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3795','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_3795\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Galati2026,<br \/>\r\ntitle = {Task goals constrain the alignment in eye-movements and speech during interpersonal coordination},<br \/>\r\nauthor = {Alexia Galati and Rick Dale and Camila Alviar and Moreno I. 
Coco},<br \/>\r\ndoi = {10.1016\/j.jml.2025.104691},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-02-01},<br \/>\r\njournal = {Journal of Memory and Language},<br \/>\r\nvolume = {146},<br \/>\r\npages = {1\u201318},<br \/>\r\npublisher = {Academic Press Inc.},<br \/>\r\nabstract = {Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering & Garrod, 2004), support this view by building on tasks that require monitoring a partner's perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a \u201cdivide and conquer\u201d strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners' eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. 
Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3795','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_3795\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Collaborative task performance is assumed to benefit from interpersonal coordination between interacting individuals. Prominent views of language use and social behavior, including the Interactive Alignment Model (IAM; Pickering &amp; Garrod, 2004), support this view by building on tasks that require monitoring a partner's perspective (e.g., in route planning), proposing that behavioral alignment enables conceptual convergence. However, the role of alignment in tasks requiring complementarity (e.g., a \u201cdivide and conquer\u201d strategy during joint visual search) remains underexplored. We address this gap by manipulating task goals (route planning vs. visual search) as forty dyads completed ten trials involving subway maps while their eye movements and speech were co-registered. We used Cross Recurrence Quantification Analysis (CRQA) to examine the temporal relationships between partners' eye fixations and word sequences, generating measures that reveal similarity and dynamic coupling. Dyads exhibited more gaze alignment in route planning than visual search across a range of CRQA metrics. Gaze alignment also varied across the trial and related differently to accuracy: in visual search, greater alignment late in the trial predicted better performance. In speech, route planning prompted longer and more entropic word sequences, but lower overall recurrence than visual search. 
This finding suggests that the two modalities organize in a compensatory fashion to support distinct task demands. These results support a theoretical framework more general than IAM, in which interactive alignment emerges as a consequence of dynamic adaptation to task goals. Overall, task goals constrain how people coordinate behavior and offer insights into how collaborating partners distribute their multimodal contributions.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3795','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_3795\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.jml.2025.104691\" title=\"Follow DOI:10.1016\/j.jml.2025.104691\" target=\"_blank\">doi:10.1016\/j.jml.2025.104691<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3795','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhanna Chuikova; Anna Izmalkova; Andriy Myachykov; Anastasiia Liashenko; Yury Shtyrov; Marie Arsalidou<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('2055','tp_abstract')\" style=\"cursor:pointer;\">Interplay between switching, inhibition, and mental attention: An exploratory eye-tracking study<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychological Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 90, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201319, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_2055\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2055','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_2055\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2055','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_2055\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2055','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_2055\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Chuikova2026,<br \/>\r\ntitle = {Interplay between switching, inhibition, and mental attention: An exploratory eye-tracking study},<br \/>\r\nauthor = {Zhanna Chuikova and Anna Izmalkova and Andriy Myachykov and Anastasiia Liashenko and Yury Shtyrov and Marie Arsalidou},<br \/>\r\ndoi = {10.1007\/s00426-025-02229-7},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-02-01},<br \/>\r\njournal = {Psychological Research},<br \/>\r\nvolume = {90},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201319},<br \/>\r\nabstract = {Cognitive flexibility (CF) allows individuals to adapt their behavior to changing environmental demands. As task complexity increases, CF may substantially impact performance by facilitating a shift towards more efficient information processing strategies. However, its role in tasks with high cognitive demands remains largely unexplored. Furthermore, while CF is associated with inhibitory control and working memory functions, their precise relationship under task demands is not yet fully understood. 
To address this gap, we investigated how CF and inhibition metrics are associated with different levels of mental attentional demand (Md), as well as CF. Additionally, we explored differences in eye-movement indices associated with high and low CF in tasks with varied levels of Md. Analyzing data from 42 young participants performing CF, inhibition, and mental attention tasks with eye movement recording for the last task, we found that multidimensional switching (i.e., switching between three rules) correlated with mental attentional capacity, whereas two-dimensional switching (i.e., switching between two rules) correlated with inhibitory control. Individuals with low and high switching scores differed in task performance and eye-movement patterns of mental attentional demand (i.e., difficulty). Specifically, those with high efficiency in multidimensional switching exhibited superior performance across all levels of mental attentional demand. Further, high-efficiency performers employed eye-movement patterns characterized by an increased number of fixations, shorter fixation durations, and decreased blink rates, with significant differences observed at higher levels of mental-attention demand. Our findings offer new insights into psychophysiological metrics related to higher-order cognitive processes, discussed in terms of cognitive theory and practical significance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2055','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_2055\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Cognitive flexibility (CF) allows individuals to adapt their behavior to changing environmental demands. 
As task complexity increases, CF may substantially impact performance by facilitating a shift towards more efficient information processing strategies. However, its role in tasks with high cognitive demands remains largely unexplored. Furthermore, while CF is associated with inhibitory control and working memory functions, their precise relationship under task demands is not yet fully understood. To address this gap, we investigated how CF and inhibition metrics are associated with different levels of mental attentional demand (Md), as well as CF. Additionally, we explored differences in eye-movement indices associated with high and low CF in tasks with varied levels of Md. Analyzing data from 42 young participants performing CF, inhibition, and mental attention tasks with eye movement recording for the last task, we found that multidimensional switching (i.e., switching between three rules) correlated with mental attentional capacity, whereas two-dimensional switching (i.e., switching between two rules) correlated with inhibitory control. Individuals with low and high switching scores differed in task performance and eye-movement patterns of mental attentional demand (i.e., difficulty). Specifically, those with high efficiency in multidimensional switching exhibited superior performance across all levels of mental attentional demand. Further, high-efficiency performers employed eye-movement patterns characterized by an increased number of fixations, shorter fixation durations, and decreased blink rates, with significant differences observed at higher levels of mental-attention demand. 
Our findings offer new insights into psychophysiological metrics related to higher-order cognitive processes, discussed in terms of cognitive theory and practical significance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2055','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_2055\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s00426-025-02229-7\" title=\"Follow DOI:10.1007\/s00426-025-02229-7\" target=\"_blank\">doi:10.1007\/s00426-025-02229-7<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2055','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Hongda Zhao; Wei Du; Chao Wang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13852','tp_abstract')\" style=\"cursor:pointer;\">Cognitive visual strategies are associated with delivery accuracy in elite wheelchair curling: Insights from eye-tracking and machine learning<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Frontiers in Psychology, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201310, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13852\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13852','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13852\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13852','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13852\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13852','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13852\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhao2026,<br \/>\r\ntitle = {Cognitive visual strategies are associated with delivery accuracy in elite wheelchair curling: Insights from eye-tracking and machine learning},<br \/>\r\nauthor = {Hongda Zhao and Wei Du and Chao Wang},<br \/>\r\ndoi = {10.3389\/fpsyg.2025.1682654},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Frontiers in Psychology},<br \/>\r\nvolume = {16},<br \/>\r\npages = {1\u201310},<br \/>\r\nabstract = {Visual search is pivotal for athletic performance, yet its role in adaptive sports like wheelchair curling remains understudied. This study investigated how eye-movement features predict delivery accuracy and distinguish elite from novice athletes. Thirty wheelchair curling athletes (15 experts, 15 novices) performed standardized delivery accuracy and visual search tasks, with eye movements recorded using the EyeLink Portable Duo system. We employed multiple regression to identify predictors of accuracy and a support vector machine (SVM) to classify athletes based on expertise. 
Experts demonstrated superior delivery accuracy and significantly more efficient visual search patterns, characterized by shorter dwell times, faster reaction times, and fewer fixations. The SVM model successfully classified athletes with 90% accuracy (AUC = 0.93), while regression analysis confirmed that specific gaze metrics were robust factors associated with performance. These findings establish a strong quantitative link between efficient gaze strategies and expert motor performance in a constrained-mobility setting. This integrated eye-tracking and machine learning approach offers a powerful framework for objectively evaluating performance and developing data-driven, personalized training interventions in wheelchair curling and other precision-focused adaptive sports.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13852','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13852\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visual search is pivotal for athletic performance, yet its role in adaptive sports like wheelchair curling remains understudied. This study investigated how eye-movement features predict delivery accuracy and distinguish elite from novice athletes. Thirty wheelchair curling athletes (15 experts, 15 novices) performed standardized delivery accuracy and visual search tasks, with eye movements recorded using the EyeLink Portable Duo system. We employed multiple regression to identify predictors of accuracy and a support vector machine (SVM) to classify athletes based on expertise. Experts demonstrated superior delivery accuracy and significantly more efficient visual search patterns, characterized by shorter dwell times, faster reaction times, and fewer fixations. 
The SVM model successfully classified athletes with 90% accuracy (AUC = 0.93), while regression analysis confirmed that specific gaze metrics were robust factors associated with performance. These findings establish a strong quantitative link between efficient gaze strategies and expert motor performance in a constrained-mobility setting. This integrated eye-tracking and machine learning approach offers a powerful framework for objectively evaluating performance and developing data-driven, personalized training interventions in wheelchair curling and other precision-focused adaptive sports.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13852','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13852\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3389\/fpsyg.2025.1682654\" title=\"Follow DOI:10.3389\/fpsyg.2025.1682654\" target=\"_blank\">doi:10.3389\/fpsyg.2025.1682654<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13852','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Huan Zhang; Keyin Chen; Pengfei Xu; Xin Zhao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13747','tp_abstract')\" style=\"cursor:pointer;\">Impact of emotional working memory training on threat-related attentional bias in social anxiety: Evidence from eye movements<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Affective Disorders, <\/span><span class=\"tp_pub_additional_volume\">vol. 393, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201311, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13747\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13747','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13747\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13747','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13747\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13747','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13747\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2026a,<br \/>\r\ntitle = {Impact of emotional working memory training on threat-related attentional bias in social anxiety: Evidence from eye movements},<br \/>\r\nauthor = {Huan Zhang and Keyin Chen and Pengfei Xu and Xin Zhao},<br \/>\r\ndoi = {10.1016\/j.jad.2025.120358},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Journal of Affective Disorders},<br \/>\r\nvolume = {393},<br \/>\r\npages = {1\u201311},<br \/>\r\npublisher = {Elsevier B.V.},<br \/>\r\nabstract = {Threat-related attentional bias is a core characteristic of social anxiety and is closely associated with impaired attentional control. While traditional working memory training (WM-T) improves cognitive control and emotional regulation, it does not address emotional information processing. Emotional working memory training (EWM-T), which integrates negative emotional stimuli, may enhance control over negative information. This study hypothesizes that EWM-T can reduce threat-related attentional bias in socially anxious individuals and outperform WM-T in decreasing sustained attention to negative stimuli. 
Two experiments were conducted to investigate the effects of EWM-T. Experiment 1 employed a dot-probe task and eye-tracking to examine threat-related attentional bias in high and low social anxiety groups. Experiment 2 compared EWM-T with WM-T in a randomized controlled trial, in which participants with high social anxiety completed 20 training sessions over 30 days. Transfer effects were evaluated pre- and post-training using the Stroop task, number-switching task, digit-span task, and active memory task. In Experiment 1, individuals with high social anxiety exhibited greater attentional vigilance and faster detection of threat stimuli. In Experiment 2, both groups showed reductions in anxiety symptoms and practice-related improvements on several cognitive tasks, with no Group \u00d7 Time interactions. Post-training eye-tracking data revealed a decrease in fixation bias toward threat stimuli, indicating improved attentional control. These findings suggest that EWM-T enhances attentional orientation and alleviates anxiety symptoms in social anxiety, with stronger transfer effects compared to WM-T. Incorporating emotional content into working memory training offers advantages for clinical interventions in social anxiety.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13747','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13747\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Threat-related attentional bias is a core characteristic of social anxiety and is closely associated with impaired attentional control. While traditional working memory training (WM-T) improves cognitive control and emotional regulation, it does not address emotional information processing. 
Emotional working memory training (EWM-T), which integrates negative emotional stimuli, may enhance control over negative information. This study hypothesizes that EWM-T can reduce threat-related attentional bias in socially anxious individuals and outperform WM-T in decreasing sustained attention to negative stimuli. Two experiments were conducted to investigate the effects of EWM-T. Experiment 1 employed a dot-probe task and eye-tracking to examine threat-related attentional bias in high and low social anxiety groups. Experiment 2 compared EWM-T with WM-T in a randomized controlled trial, in which participants with high social anxiety completed 20 training sessions over 30 days. Transfer effects were evaluated pre- and post-training using the Stroop task, number-switching task, digit-span task, and active memory task. In Experiment 1, individuals with high social anxiety exhibited greater attentional vigilance and faster detection of threat stimuli. In Experiment 2, both groups showed reductions in anxiety symptoms and practice-related improvements on several cognitive tasks, with no Group \u00d7 Time interactions. Post-training eye-tracking data revealed a decrease in fixation bias toward threat stimuli, indicating improved attentional control. These findings suggest that EWM-T enhances attentional orientation and alleviates anxiety symptoms in social anxiety, with stronger transfer effects compared to WM-T. 
Incorporating emotional content into working memory training offers advantages for clinical interventions in social anxiety.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13747','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13747\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.jad.2025.120358\" title=\"Follow DOI:10.1016\/j.jad.2025.120358\" target=\"_blank\">doi:10.1016\/j.jad.2025.120358<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13747','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Xuefei Yu; Atul Gopal; Ken-ichi Inoue; Martin O. Bohlen; Genevieve M. Kuczewski; Marc A. Sommer; Hendrikje Nienborg; Masahiko Takada; Okihide Hikosaka<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13603','tp_abstract')\" style=\"cursor:pointer;\">Retrograde optogenetics reveals sensorimotor convergence within a corticotectal pathway of non-human primates<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Current Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 36, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
236\u2013242, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13603\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13603','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13603\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13603','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13603\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13603','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13603\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Yu2026,<br \/>\r\ntitle = {Retrograde optogenetics reveals sensorimotor convergence within a corticotectal pathway of non-human primates},<br \/>\r\nauthor = {Xuefei Yu and Atul Gopal and Ken-ichi Inoue and Martin O. Bohlen and Genevieve M. Kuczewski and Marc A. Sommer and Hendrikje Nienborg and Masahiko Takada and Okihide Hikosaka},<br \/>\r\ndoi = {10.1016\/j.cub.2025.11.021},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Current Biology},<br \/>\r\nvolume = {36},<br \/>\r\nnumber = {1},<br \/>\r\npages = {236\u2013242},<br \/>\r\nabstract = {Understanding how the cerebral cortex communicates with subcortical areas to drive behavior remains a central question in system neuroscience. 
One key unresolved issue is whether prefrontal cortical outputs to motor-related subcortical regions carry predominantly motor commands [1] or mixed sensory-motor signals [2,3]. Retrograde optogenetics offers a powerful way to interrogate such projection-defined circuits [4\u20137], but its use in non-human primates has been limited [8\u201311]. Here, we applied retrograde optogenetics in awake macaques to directly test the functional organization of the corticotectal projection from the frontal eye field (FEF) to the superior colliculus (SC). We asked whether the FEF output signals to SC are motor-dominant or broadly sensory-motor. Optical activation of this pathway evoked robust, contralateral saccades and selectively modulated reaction times, demonstrating its causal role in saccade generation. Optogenetically tagging FEF neurons projecting to SC revealed a heterogeneous population of visual, visuomotor, and motor neurons. This diverse output converged predominantly onto motor-related neurons in the SC. These findings support a visuomotor convergence model, in which diverse FEF outputs drive motor-selective SC neurons with activity sufficient for saccade generation, and thus resolve long-standing questions over the composition of FEF outputs. Additionally, our results establish retrograde optogenetics as a tool for dissecting projection-defined circuits in primates and for precisely probing the neural pathways that link perception to action.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13603','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13603\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Understanding how the cerebral cortex communicates with subcortical areas to drive behavior remains a central question in system neuroscience. 
One key unresolved issue is whether prefrontal cortical outputs to motor-related subcortical regions carry predominantly motor commands [1] or mixed sensory-motor signals [2,3]. Retrograde optogenetics offers a powerful way to interrogate such projection-defined circuits [4\u20137], but its use in non-human primates has been limited [8\u201311]. Here, we applied retrograde optogenetics in awake macaques to directly test the functional organization of the corticotectal projection from the frontal eye field (FEF) to the superior colliculus (SC). We asked whether the FEF output signals to SC are motor-dominant or broadly sensory-motor. Optical activation of this pathway evoked robust, contralateral saccades and selectively modulated reaction times, demonstrating its causal role in saccade generation. Optogenetically tagging FEF neurons projecting to SC revealed a heterogeneous population of visual, visuomotor, and motor neurons. This diverse output converged predominantly onto motor-related neurons in the SC. These findings support a visuomotor convergence model, in which diverse FEF outputs drive motor-selective SC neurons with activity sufficient for saccade generation, and thus resolve long-standing questions over the composition of FEF outputs. 
Additionally, our results establish retrograde optogenetics as a tool for dissecting projection-defined circuits in primates and for precisely probing the neural pathways that link perception to action.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13603','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13603\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.cub.2025.11.021\" title=\"Follow DOI:10.1016\/j.cub.2025.11.021\" target=\"_blank\">doi:10.1016\/j.cub.2025.11.021<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13603','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Songqiao Xie; Chunyan He<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13293','tp_abstract')\" style=\"cursor:pointer;\">An empirical study on native Mandarin-speaking children's metonymy comprehension development<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Child Language, <\/span><span class=\"tp_pub_additional_volume\">vol. 53, <\/span><span class=\"tp_pub_additional_pages\">pp. 
80\u2013107, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13293\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13293','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13293\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13293','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13293\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Xie2026,<br \/>\r\ntitle = {An empirical study on native Mandarin-speaking children's metonymy comprehension development},<br \/>\r\nauthor = {Songqiao Xie and Chunyan He},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Journal of Child Language},<br \/>\r\nvolume = {53},<br \/>\r\npages = {80\u2013107},<br \/>\r\nabstract = {This study investigates Mandarin-speaking children's (age 3\u20137) comprehension development of novel and conventional metonymy, combining online and offline methods. Both online and offline data show significantly better performances from the oldest group (6-to-7-year-old) and a delayed acquisition of conventional metonymy compared with novel metonymy. However, part of offline data shows no significant difference between adjacent age groups, while the eye-tracking data show a chronological development from age 3\u20137. Furthermore, in offline tasks, the three-year-old group features a high choice randomness and the four-to-five-year-olds show the longest reaction time. Therefore, we argue that, not only age but also metonymy type can influence metonymy acquisition, and that a lack of socio-cultural experience can be a source of acquisition difficulty for children under six. 
Methodologically speaking, we believe that online methods should not be considered superior to offline ones as they investigate different aspects of implicit and explicit language comprehension.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13293','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13293\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This study investigates Mandarin-speaking children's (age 3\u20137) comprehension development of novel and conventional metonymy, combining online and offline methods. Both online and offline data show significantly better performances from the oldest group (6-to-7-year-old) and a delayed acquisition of conventional metonymy compared with novel metonymy. However, part of offline data shows no significant difference between adjacent age groups, while the eye-tracking data show a chronological development from age 3\u20137. Furthermore, in offline tasks, the three-year-old group features a high choice randomness and the four-to-five-year-olds show the longest reaction time. Therefore, we argue that, not only age but also metonymy type can influence metonymy acquisition, and that a lack of socio-cultural experience can be a source of acquisition difficulty for children under six. 
Methodologically speaking, we believe that online methods should not be considered superior to offline ones as they investigate different aspects of implicit and explicit language comprehension.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13293','tp_abstract')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Wiktor Wicec\u0142awski; Jakub Paszulewicz<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13022','tp_abstract')\" style=\"cursor:pointer;\">ERP evidence of attentional selection outside of effective oculomotor range<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Experimental Brain Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 244, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20139, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13022\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13022','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13022\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13022','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13022\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13022','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13022\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wi\u0229clawski2026,<br \/>\r\ntitle = {ERP evidence of attentional selection outside of effective oculomotor range},<br \/>\r\nauthor = {Wiktor Wicec\u0142awski and Jakub Paszulewicz},<br \/>\r\ndoi = {10.1007\/s00221-025-07219-0},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Experimental Brain Research},<br \/>\r\nvolume = {244},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u20139},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {The close link between visual attention and the oculomotor system is well documented. Within the selection-for-action framework, two perspectives exist. According to Visual Attention Model (VAM) attention is seen as a prerequisite for successful movement execution, though it is considered a distinct cognitive and neural process. By contrast, the premotor theory of attention (PMTA) argues that the beneficial effects of attention are fully accounted for by the system's preparation for saccadic eye movements. 
From this standpoint, a central prediction emerges: attentional advantages should be confined to regions within the oculomotor range, since saccadic planning is not feasible outside those limits. A common way to examine this prediction is to present cues and targets in a hemifield beyond the oculomotor range, typically achieved by occluding one eye while abducting the other. Using this method, Smith et al. showed that in a visual search task, exogenous orienting is reduced in the temporal hemifield when the eye is abducted. They concluded that exogenous attentional orienting is constrained by the range of potential saccadic movements. In our study, we sought to replicate Smith et al.'s findings while extending the paradigm with EEG recordings\u2014an approach not yet applied in this context. PMTA predicts that, under eye abduction, stimuli appearing in the temporal hemifield would yield diminished N2pc amplitudes. An ANOVA revealed no reduction of N2pc amplitude in the temporal hemifield. Taken together, our results support the growing body of evidence suggesting that visual attention is not strictly bound to the oculomotor range.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13022','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13022\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The close link between visual attention and the oculomotor system is well documented. Within the selection-for-action framework, two perspectives exist. According to Visual Attention Model (VAM) attention is seen as a prerequisite for successful movement execution, though it is considered a distinct cognitive and neural process. 
By contrast, the premotor theory of attention (PMTA) argues that the beneficial effects of attention are fully accounted for by the system's preparation for saccadic eye movements. From this standpoint, a central prediction emerges: attentional advantages should be confined to regions within the oculomotor range, since saccadic planning is not feasible outside those limits. A common way to examine this prediction is to present cues and targets in a hemifield beyond the oculomotor range, typically achieved by occluding one eye while abducting the other. Using this method, Smith et al. showed that in a visual search task, exogenous orienting is reduced in the temporal hemifield when the eye is abducted. They concluded that exogenous attentional orienting is constrained by the range of potential saccadic movements. In our study, we sought to replicate Smith et al.'s findings while extending the paradigm with EEG recordings\u2014an approach not yet applied in this context. PMTA predicts that, under eye abduction, stimuli appearing in the temporal hemifield would yield diminished N2pc amplitudes. An ANOVA revealed no reduction of N2pc amplitude in the temporal hemifield. 
Taken together, our results support the growing body of evidence suggesting that visual attention is not strictly bound to the oculomotor range.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13022','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13022\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s00221-025-07219-0\" title=\"Follow DOI:10.1007\/s00221-025-07219-0\" target=\"_blank\">doi:10.1007\/s00221-025-07219-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13022','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yang Wang; Lei Zhang; Jon D. Elhai; Christian Montag; Haibo Yang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12769','tp_abstract')\" style=\"cursor:pointer;\">The interacting role of fear of missing out in attentional bias dynamics during problematic social media use<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Addictive Behaviors, <\/span><span class=\"tp_pub_additional_volume\">vol. 173, <\/span><span class=\"tp_pub_additional_number\">no. 393, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20138, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12769\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12769','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12769\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12769','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12769\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12769','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12769\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wang2026,<br \/>\r\ntitle = {The interacting role of fear of missing out in attentional bias dynamics during problematic social media use},<br \/>\r\nauthor = {Yang Wang and Lei Zhang and Jon D. Elhai and Christian Montag and Haibo Yang},<br \/>\r\ndoi = {10.1016\/j.addbeh.2025.108550},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Addictive Behaviors},<br \/>\r\nvolume = {173},<br \/>\r\nnumber = {393},<br \/>\r\npages = {1\u20138},<br \/>\r\npublisher = {Elsevier Ltd},<br \/>\r\nabstract = {Problematic social media use (PSMU) is increasingly conceptualized as a behavioral addiction involving attentional bias toward social media icons. Although fear of missing out (FoMO) contributes to PSMU maintenance, its dynamic interactive role in attentional bias dynamics remains unclear. Guided by the I-PACE model and attentional bias theory, this study examined whether and when FoMO modulates gaze-based attentional bias toward social media icons in PSMU. 
912 university students completed online screening for PSMU and FoMO; 55 meeting PSMU criteria (Mage = 19.60) were categorized into high- or low-FoMO groups. Participants performed a visual dot-probe task with social\/non-social app icons while eye-tracking recorded gaze behavior across four 500 ms time windows. Results revealed FoMO significantly interacted with attentional bias in two critical phases: During early processing (0\u2013500 ms), the PSMU\/high-FoMO group exhibited attentional orienting deceleration to social media icons, whereas PSMU\/low-FoMO showed attentional maintenance. In later processing (1000\u20131500 ms), PSMU\/high-FoMO demonstrated attentional vigilance-maintenance, while PSMU\/low-FoMO displayed avoidance. These findings indicate FoMO exerts a temporally dynamic interaction effect on attentional bias in PSMU\u2014characterized by initial orienting delays followed by sustained attentional engagement with social media icons. This supports reconceptualizing FoMO as a core psychological mechanism that reinforces PSMU through biased attentional dynamics, advancing theoretical alignment with the I-PACE framework.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12769','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12769\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Problematic social media use (PSMU) is increasingly conceptualized as a behavioral addiction involving attentional bias toward social media icons. Although fear of missing out (FoMO) contributes to PSMU maintenance, its dynamic interactive role in attentional bias dynamics remains unclear. Guided by the I-PACE model and attentional bias theory, this study examined whether and when FoMO modulates gaze-based attentional bias toward social media icons in PSMU. 
912 university students completed online screening for PSMU and FoMO; 55 meeting PSMU criteria (Mage = 19.60) were categorized into high- or low-FoMO groups. Participants performed a visual dot-probe task with social\/non-social app icons while eye-tracking recorded gaze behavior across four 500 ms time windows. Results revealed FoMO significantly interacted with attentional bias in two critical phases: During early processing (0\u2013500 ms), the PSMU\/high-FoMO group exhibited attentional orienting deceleration to social media icons, whereas PSMU\/low-FoMO showed attentional maintenance. In later processing (1000\u20131500 ms), PSMU\/high-FoMO demonstrated attentional vigilance-maintenance, while PSMU\/low-FoMO displayed avoidance. These findings indicate FoMO exerts a temporally dynamic interaction effect on attentional bias in PSMU\u2014characterized by initial orienting delays followed by sustained attentional engagement with social media icons. This supports reconceptualizing FoMO as a core psychological mechanism that reinforces PSMU through biased attentional dynamics, advancing theoretical alignment with the I-PACE framework.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12769','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12769\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.addbeh.2025.108550\" title=\"Follow DOI:10.1016\/j.addbeh.2025.108550\" target=\"_blank\">doi:10.1016\/j.addbeh.2025.108550<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12769','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Mingze Sun; Zhe Qu; Yajie Wang; Jingwen Xiang; Yulong Ding<\/p><p class=\"tp_pub_title\"><a 
class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11518','tp_abstract')\" style=\"cursor:pointer;\">A well-trained nonsalient shape captures attention with delayed inhibition of return<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychonomic Bulletin &amp; Review, <\/span><span class=\"tp_pub_additional_volume\">vol. 33, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11518\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11518','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11518\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11518','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11518\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11518','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11518\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Sun2026,<br \/>\r\ntitle = {A well-trained nonsalient shape captures attention with delayed inhibition of return},<br \/>\r\nauthor = {Mingze Sun and Zhe Qu and Yajie Wang and Jingwen Xiang and Yulong Ding},<br \/>\r\ndoi = {10.3758\/s13423-025-02791-6},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Psychonomic Bulletin & Review},<br \/>\r\nvolume = {33},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Springer},<br \/>\r\nabstract = {Numerous studies adopting Posner peripheral 
cueing paradigms have shown that exogenous attentional orientation (EAO) to a salient-but-irrelevant stimulus involves two opposing attentional processes: early attentional capture and late attentional suppression. Recent evidence has indicated that long-term perceptual learning can induce involuntary attentional capture by nonsalient shapes. However, it remains unclear whether a well-trained nonsalient shape could exhibit a biphasic pattern of EAO similar to that observed with physically salient stimuli, including both an early exogenous attentional shift and a late inhibition of return (IOR). Through both a perceptual learning task and a classic peripheral cueing task, the current study showed that a well-trained nonsalient shape cue could exhibit a biphasic pattern of EAO. When compared with an untrained shape, a well-trained nonsalient shape facilitated subsequent target detection at short cue-target onset asynchronies (CTOAs, 200\u2013300 ms) and deteriorated target detection at a relatively long CTOA (800 ms), but not at 400- to 600-ms CTOAs. As a comparison, a detectability-matched onset cue or luminance contrast cue elicited a facilitatory effect at 200- to 300-ms CTOAs and an inhibitory effect starting from 400-ms CTOA. A control eye-tracking experiment suggested that the absence of IOR effects at 400- to 600-ms CTOAs in the trained cue task was not due to fewer eye movements during the task. Our results indicated that, as opposed to physically salient stimuli, a well-trained nonsalient shape induced delayed IOR after an evident exogenous shift of visual attention. The different patterns of EAO processes support the notion that prior experience (such as perceptual learning) plays a unique role in modulating our exogenous attention. 
Possible underlying mechanisms are proposed.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11518','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11518\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Numerous studies adopting Posner peripheral cueing paradigms have shown that exogenous attentional orientation (EAO) to a salient-but-irrelevant stimulus involves two opposing attentional processes: early attentional capture and late attentional suppression. Recent evidence has indicated that long-term perceptual learning can induce involuntary attentional capture by nonsalient shapes. However, it remains unclear whether a well-trained nonsalient shape could exhibit a biphasic pattern of EAO similar to that observed with physically salient stimuli, including both an early exogenous attentional shift and a late inhibition of return (IOR). Through both a perceptual learning task and a classic peripheral cueing task, the current study showed that a well-trained nonsalient shape cue could exhibit a biphasic pattern of EAO. When compared with an untrained shape, a well-trained nonsalient shape facilitated subsequent target detection at short cue-target onset asynchronies (CTOAs, 200\u2013300 ms) and deteriorated target detection at a relatively long CTOA (800 ms), but not at 400- to 600-ms CTOAs. As a comparison, a detectability-matched onset cue or luminance contrast cue elicited a facilitatory effect at 200- to 300-ms CTOAs and an inhibitory effect starting from 400-ms CTOA. A control eye-tracking experiment suggested that the absence of IOR effects at 400- to 600-ms CTOAs in the trained cue task was not due to fewer eye movements during the task. 
Our results indicated that, as opposed to physically salient stimuli, a well-trained nonsalient shape induced delayed IOR after an evident exogenous shift of visual attention. The different patterns of EAO processes support the notion that prior experience (such as perceptual learning) plays a unique role in modulating our exogenous attention. Possible underlying mechanisms are proposed.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11518','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11518\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13423-025-02791-6\" title=\"Follow DOI:10.3758\/s13423-025-02791-6\" target=\"_blank\">doi:10.3758\/s13423-025-02791-6<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11518','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Waxun Su; Xiao Lin; Weijian Liu; Tak Kwan Lam; Peng Li; Qiandong Wang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11481','tp_abstract')\" style=\"cursor:pointer;\">The impact of depression and social anxiety on eye orientation and disengagement in individuals with and without depression<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Psychiatric Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 192, <\/span><span class=\"tp_pub_additional_pages\">pp. 
325\u2013331, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11481\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11481','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11481\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11481','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11481\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11481','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11481\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Su2026,<br \/>\r\ntitle = {The impact of depression and social anxiety on eye orientation and disengagement in individuals with and without depression},<br \/>\r\nauthor = {Waxun Su and Xiao Lin and Weijian Liu and Tak Kwan Lam and Peng Li and Qiandong Wang},<br \/>\r\ndoi = {10.1016\/j.jpsychires.2025.10.077},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Journal of Psychiatric Research},<br \/>\r\nvolume = {192},<br \/>\r\npages = {325\u2013331},<br \/>\r\nabstract = {In individuals with depression, comorbidity with social anxiety disorder is prevalent and often exacerbates symptoms and social dysfunction, such as exhibiting more severe social avoidance and interpersonal impairment. Our study used the eye-tracking technique to explore how depression and social anxiety, individually and in combination, influence orientation toward and disengagement from the eyes in individuals diagnosed with depression or not.
Participants were 49 healthy individuals and 64 individuals with depression, whose gaze was initially guided to the eye or mouth region immediately before the onset of the face. Latency to disengage from the guided regions and latency to orient to the eyes following the onset of the face were measured. The findings revealed that, firstly, individuals showed delayed disengagement from the eyes compared to the mouth regardless of depression diagnosis or social anxiety level. Secondly, in healthy individuals, increased social anxiety was related to quick eye orientation. Thirdly, in individuals with depression, longer disengagement latencies from the eyes were associated with higher levels of depression or social anxiety, but only when one of the scores was high, not medium or low. These findings highlight the importance of understanding the distinct and combined impacts of depression and social anxiety on clinical and nonclinical individuals, informing more targeted clinical interventions and assessment strategies.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11481','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11481\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In individuals with depression, comorbidity with social anxiety disorder is prevalent and often exacerbates symptoms and social dysfunction, such as exhibiting more severe social avoidance and interpersonal impairment. Our study used the eye-tracking technique to explore how depression and social anxiety, individually and in combination, influence orientation toward and disengagement from the eyes in individuals diagnosed with depression or not.
Participants were 49 healthy individuals and 64 individuals with depression, whose gaze was initially guided to the eye or mouth region immediately before the onset of the face. Latency to disengage from the guided regions and latency to orient to the eyes following the onset of the face were measured. The findings revealed that, firstly, individuals showed delayed disengagement from the eyes compared to the mouth regardless of depression diagnosis or social anxiety level. Secondly, in healthy individuals, increased social anxiety was related to quick eye orientation. Thirdly, in individuals with depression, longer disengagement latencies from the eyes were associated with higher levels of depression or social anxiety, but only when one of the scores was high, not medium or low. These findings highlight the importance of understanding the distinct and combined impacts of depression and social anxiety on clinical and nonclinical individuals, informing more targeted clinical interventions and assessment strategies.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11481','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11481\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.jpsychires.2025.10.077\" title=\"Follow DOI:10.1016\/j.jpsychires.2025.10.077\" target=\"_blank\">doi:10.1016\/j.jpsychires.2025.10.077<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11481','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Anjum Shaikh; Idah Mbithi; Maiko Okamura; Skylar Rice; Lily Rosan; Fabio Solorzano Quesada; Trafton Drew; Brennan Payne; Jeff Moher<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" 
onclick=\"teachpress_pub_showhide('10840','tp_abstract')\" style=\"cursor:pointer;\">Distractor avoidance and early quitting in visual search<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Attention, Perception &amp; Psychophysics, <\/span><span class=\"tp_pub_additional_volume\">vol. 88, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201313, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10840','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10840','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10840','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10840\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shaikh2026,<br \/>\r\ntitle = {Distractor avoidance and early quitting in visual search},<br \/>\r\nauthor = {Anjum Shaikh and Idah Mbithi and Maiko Okamura and Skylar Rice and Lily Rosan and Fabio Solorzano Quesada and Trafton Drew and Brennan Payne and Jeff Moher},<br \/>\r\ndoi = {10.3758\/s13414-025-03188-2},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Attention, Perception & Psychophysics},<br \/>\r\nvolume = {88},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201313},<br \/>\r\nabstract = {In the current study, we examined the mechanisms 
underpinning how salient distractors produce early quitting in visual search. Participants completed a simple visual search task and indicated whether a target was present or absent. When salient distractors were present, fewer eye movements occurred before target-absent responses, and less of the display area was searched. Surprisingly, participants actively avoided directing eye movements towards the distractor. Still, salient distractors increased both search errors, which were committed when the target was never fixated, and decision errors, which were committed when the target was fixated but not detected. Our results demonstrate that salient distractors trigger early quitting by reducing the amount of information that observers extract from the search image and disrupting search guidance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10840','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10840\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In the current study, we examined the mechanisms underpinning how salient distractors produce early quitting in visual search. Participants completed a simple visual search task and indicated whether a target was present or absent. When salient distractors were present, fewer eye movements occurred before target-absent responses, and less of the display area was searched. Surprisingly, participants actively avoided directing eye movements towards the distractor. Still, salient distractors increased both search errors, which were committed when the target was never fixated, and decision errors, which were committed when the target was fixated but not detected. 
Our results demonstrate that salient distractors trigger early quitting by reducing the amount of information that observers extract from the search image and disrupting search guidance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10840','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10840\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13414-025-03188-2\" title=\"Follow DOI:10.3758\/s13414-025-03188-2\" target=\"_blank\">doi:10.3758\/s13414-025-03188-2<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10840','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Thomas Seacrist; Elizabeth A. Walshe; Shukai Cheng; Emily Brown; Charlotte Birnbaum; Victoria Kaufman; Flaura K. Winston; William C. Gaetz<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10749','tp_abstract')\" style=\"cursor:pointer;\">A novel paradigm for identifying eye-tracking metrics associated with cognitive control during driving through MEG neuroimaging<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Transportation Research Part F: Traffic Psychology and Behaviour, <\/span><span class=\"tp_pub_additional_volume\">vol. 116, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201313, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10749\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10749','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10749\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10749','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10749\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10749','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10749\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Seacrist2026,<br \/>\r\ntitle = {A novel paradigm for identifying eye-tracking metrics associated with cognitive control during driving through MEG neuroimaging},<br \/>\r\nauthor = {Thomas Seacrist and Elizabeth A. Walshe and Shukai Cheng and Emily Brown and Charlotte Birnbaum and Victoria Kaufman and Flaura K. Winston and William C. Gaetz},<br \/>\r\ndoi = {10.1016\/j.trf.2025.103434},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Transportation Research Part F: Traffic Psychology and Behaviour},<br \/>\r\nvolume = {116},<br \/>\r\npages = {1\u201313},<br \/>\r\npublisher = {Elsevier Ltd},<br \/>\r\nabstract = {Understanding the neurocognitive underpinnings of driving behavior in adolescents is critical to improving road safety. To address this, we established a novel paradigm linking magnetoencephalography (MEG)-recorded frequency-specific brain activity to simulated driving performance, identifying periods of increased cognitive control. 
However, this initial paradigm did not incorporate eye-tracking \u2013 a potentially scalable proxy for cognitive control that could be leveraged by in-vehicle driver monitoring systems. This proof-of-concept study expands our paradigm by integrating eye-tracking to identify scanning behavior metrics associated with periods of increased cognitive control validated by MEG. Typically developing adolescents (n = 11; mean age = 15.1 \u00b1 1.5 yrs) completed three driving tasks of varying cognitive demand, and MEG frequency specific analysis confirmed periods of high (Hi) and low (Lo) cognitive control via the established biomarker of frontal midline theta (FMT). Fixation count, fixation duration, horizontal\/vertical mean gaze position, saccade amplitude, and horizontal\/vertical spread of search were compared between Hi vs. Lo periods of cognitive control. Task-specific differences in fixation count (p &lt; 0.05), mean gaze position (p &lt; 0.01), saccade amplitude (p &lt; 0.05), and spread of search (p &lt; 0.01) were observed between Hi compared to Lo cognitive control periods. These differences corresponded to expected task-specific changes in scanning behavior that would accompany cognitive control over behavior, suggesting that eye-tracking may serve as a proxy for underlying neurocognitive processes.
This integrated approach demonstrates methodological rigor and offers a promising framework for further research and informing development of in-vehicle driver monitoring systems for detecting cognitive deficits in real time, with implications for enhancing teen driver safety.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10749','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10749\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Understanding the neurocognitive underpinnings of driving behavior in adolescents is critical to improving road safety. To address this, we established a novel paradigm linking magnetoencephalography (MEG)-recorded frequency-specific brain activity to simulated driving performance, identifying periods of increased cognitive control. However, this initial paradigm did not incorporate eye-tracking \u2013 a potentially scalable proxy for cognitive control that could be leveraged by in-vehicle driver monitoring systems. This proof-of-concept study expands our paradigm by integrating eye-tracking to identify scanning behavior metrics associated with periods of increased cognitive control validated by MEG. Typically developing adolescents (n = 11; mean age = 15.1 \u00b1 1.5 yrs) completed three driving tasks of varying cognitive demand, and MEG frequency specific analysis confirmed periods of high (Hi) and low (Lo) cognitive control via the established biomarker of frontal midline theta (FMT). Fixation count, fixation duration, horizontal\/vertical mean gaze position, saccade amplitude, and horizontal\/vertical spread of search were compared between Hi vs. Lo periods of cognitive control. 
Task-specific differences in fixation count (p &lt; 0.05), mean gaze position (p &lt; 0.01), saccade amplitude (p &lt; 0.05), and spread of search (p &lt; 0.01) were observed between Hi compared to Lo cognitive control periods. These differences corresponded to expected task-specific changes in scanning behavior that would accompany cognitive control over behavior, suggesting that eye-tracking may serve as a proxy for underlying neurocognitive processes. This integrated approach demonstrates methodological rigor and offers a promising framework for further research and informing development of in-vehicle driver monitoring systems for detecting cognitive deficits in real time, with implications for enhancing teen driver safety.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10749','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10749\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.trf.2025.103434\" title=\"Follow DOI:10.1016\/j.trf.2025.103434\" target=\"_blank\">doi:10.1016\/j.trf.2025.103434<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10749','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Mohammadhossein Salari; Diederick C.
Niehorster; Marcus Nystr\u00f6m; Roman Bednarik<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10392','tp_abstract')\" style=\"cursor:pointer;\">The effect of pupil size on data quality in head-mounted eye trackers<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Behavior research methods, <\/span><span class=\"tp_pub_additional_volume\">vol. 58, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10392','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10392','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10392','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10392\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Salari2026,<br \/>\r\ntitle = {The effect of pupil size on data quality in head-mounted eye trackers},<br \/>\r\nauthor = {Mohammadhossein Salari and Diederick C. 
Niehorster and Marcus Nystr\u00f6m and Roman Bednarik},<br \/>\r\ndoi = {10.3758\/s13428-025-02880-3},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Behavior research methods},<br \/>\r\nvolume = {58},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\nabstract = {Changes in pupil size can lead to apparent gaze shifts in data recorded with video-based eye trackers in the absence of physical eye rotation. This is known as the pupil-size artifact (PSA). While the PSA is widely reported in desktop eye trackers, it is unknown whether and to what extent it occurs in head-mounted eye trackers. In this paper, we examined the effects of pupil size variations on eye-tracking data quality in four head-mounted eye trackers: the Pupil Core, the Pupil Neon, the SMI ETG 2w, and the Tobii Pro Glasses 2, in addition to a widely used desktop eye tracker, the SR Research EyeLink 1000 Plus. Participants viewed a central target on a monitor while we systematically varied the screen brightness to induce controlled pupil size changes. All head-mounted eye trackers exhibited PSA, with apparent gaze shifts ranging from 0.94\u00b0 for the Pupil Neon to 3.46\u00b0 for the Pupil Core. Except for the Pupil Neon, all eye trackers exhibited a significant change in accuracy due to pupil size variations. Precision measures showed device-specific effects of pupil size changes, with some eye trackers performing better in the bright condition and others in the dark condition.
These findings demonstrated that, just like desktop eye trackers, head-mounted video-based eye trackers exhibited PSA.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10392','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10392\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Changes in pupil size can lead to apparent gaze shifts in data recorded with video-based eye trackers in the absence of physical eye rotation. This is known as the pupil-size artifact (PSA). While the PSA is widely reported in desktop eye trackers, it is unknown whether and to what extent it occurs in head-mounted eye trackers. In this paper, we examined the effects of pupil size variations on eye-tracking data quality in four head-mounted eye trackers: the Pupil Core, the Pupil Neon, the SMI ETG 2w, and the Tobii Pro Glasses 2, in addition to a widely used desktop eye tracker, the SR Research EyeLink 1000 Plus. Participants viewed a central target on a monitor while we systematically varied the screen brightness to induce controlled pupil size changes. All head-mounted eye trackers exhibited PSA, with apparent gaze shifts ranging from 0.94\u00b0 for the Pupil Neon to 3.46\u00b0 for the Pupil Core. Except for the Pupil Neon, all eye trackers exhibited a significant change in accuracy due to pupil size variations. Precision measures showed device-specific effects of pupil size changes, with some eye trackers performing better in the bright condition and others in the dark condition.
These findings demonstrated that, just like desktop eye trackers, head-mounted video-based eye trackers exhibited PSA.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10392','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10392\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13428-025-02880-3\" title=\"Follow DOI:10.3758\/s13428-025-02880-3\" target=\"_blank\">doi:10.3758\/s13428-025-02880-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10392','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Estelle Raffin; Roberto F. Salamanca-Giron; Krystel R. Huxlin; Olivier Reynaud; Loan Mattera; Roberto Martuzzi; Friedhelm C. Hummel<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9802','tp_abstract')\" style=\"cursor:pointer;\">Causal disconnectomics of motion perception networks: Insights from transcranial magnetic stimulation-induced BOLD responses<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">The Journal of Physiology, <\/span><span class=\"tp_pub_additional_volume\">vol. 604, <\/span><span class=\"tp_pub_additional_pages\">pp. 
503\u2013526, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9802\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9802','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9802\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9802','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9802\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9802','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9802\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Raffin2026,<br \/>\r\ntitle = {Causal disconnectomics of motion perception networks: Insights from transcranial magnetic stimulation-induced BOLD responses},<br \/>\r\nauthor = {Estelle Raffin and Roberto F. Salamanca-Giron and Krystel R. Huxlin and Olivier Reynaud and Loan Mattera and Roberto Martuzzi and Friedhelm C. Hummel},<br \/>\r\ndoi = {10.1113\/JP289699},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {The Journal of Physiology},<br \/>\r\nvolume = {604},<br \/>\r\npages = {503\u2013526},<br \/>\r\nabstract = {Understanding how focal perturbations trigger large-scale network reorganization is essential for uncovering the neural mechanisms that support perception and behaviour. Here we used a transcranial magnetic stimulation (TMS) perturbational approach by applying brief 10 Hz TMS to early visual areas (EVAs) or the medio-temporal (MT) area in healthy participants while recording concurrent functional magnetic resonance imaging (fMRI).
TMS delivered during the early stages of motion processing specifically impaired direction discrimination at both sites, whereas disruption of the later processing phase impaired performances only for the MT condition. Despite a similar local increase in BOLD activity induced by EVA and MT stimulation, the broader network responses diverged significantly. Perturbation of EVA elicited a more robust and efficient pattern of functional reorganization, manifesting as more constrained BOLD changes, consistent with greater resilience to focal disruption. In contrast, behavioural impairments induced by MT stimulation were accompanied by a disorganized and less-efficient network configuration, characterized by smaller small-world properties and longer path lengths. The decrease in performances induced by MT stimulation scaled with lower clustering coefficients, implying a more random or decentralized network structure. These findings demonstrate that TMS-fMRI coupling provides a powerful framework for causally mapping the relationships between local neural perturbations, large-scale network dynamics and behavioural performance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9802','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9802\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Understanding how focal perturbations trigger large-scale network reorganization is essential for uncovering the neural mechanisms that support perception and behaviour. Here we used a transcranial magnetic stimulation (TMS) perturbational approach by applying brief 10 Hz TMS to early visual areas (EVAs) or the medio-temporal (MT) area in healthy participants while recording concurrent functional magnetic resonance imaging (fMRI).
TMS delivered during the early stages of motion processing specifically impaired direction discrimination at both sites, whereas disruption of the later processing phase impaired performances only for the MT condition. Despite a similar local increase in BOLD activity induced by EVA and MT stimulation, the broader network responses diverged significantly. Perturbation of EVA elicited a more robust and efficient pattern of functional reorganization, manifesting as more constrained BOLD changes, consistent with greater resilience to focal disruption. In contrast, behavioural impairments induced by MT stimulation were accompanied by a disorganized and less-efficient network configuration, characterized by smaller small-world properties and longer path lengths. The decrease in performances induced by MT stimulation scaled with lower clustering coefficients, implying a more random or decentralized network structure. These findings demonstrate that TMS-fMRI coupling provides a powerful framework for causally mapping the relationships between local neural perturbations, large-scale network dynamics and behavioural performance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9802','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9802\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1113\/JP289699\" title=\"Follow DOI:10.1113\/JP289699\" target=\"_blank\">doi:10.1113\/JP289699<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9802','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhongling Pi; Xuemei Huang; Richard E.
Mayer; Xin Zhao; Xiying Li<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9414','tp_abstract')\" style=\"cursor:pointer;\">Role of the instructor's social cues in instructional videos<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Education Sciences, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201315, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9414\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9414','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9414\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9414','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9414\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9414','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9414\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Pi2026,<br \/>\r\ntitle = {Role of the instructor's social cues in instructional videos},<br \/>\r\nauthor = {Zhongling Pi and Xuemei Huang and Richard E. 
Mayer and Xin Zhao and Xiying Li},<br \/>\r\ndoi = {10.3390\/educsci16010082},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Education Sciences},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\nabstract = {Little attention has been paid to whether an instructor's hand-pointing gestures or use of a mouse-guided arrow can mitigate the attentional loss caused by an instructor's happy facial expressions or can enhance the social benefits of these expressions in instructional videos. The goal of the present study is to determine whether social cues in an instructional video affect learning processes and outcomes. The participants were 57 female students from a university. We employed a 2 \u00d7 2 mixed experimental design. The instructor's facial expression was a within-subject variable, while the type of pointing cue was a between-subject variable. Students who had the smiling instructor rather than the bored instructor gave higher ratings of the perceived positive emotion of the instructor, felt more positive emotion, and had more motivation to learn. Eye-tracking technology showed that students who learned with the smiling instructor spent more time looking at the content on the slides than those who learned with a bored instructor. Students who learned with the smiling instructor scored higher on a learning outcome post-test than those who learned with the bored instructor. Among female Chinese students, this pattern is consistent with the five steps posited by the positivity principle, which concludes that people learn better from instructors who exhibit positive social cues. Pointing with a human hand was not superior to pointing with an arrow, suggesting that in this case hand-pointing was not a strong social cue and did not moderate the effects of facial expression. 
Given the exclusively female sample, future research should examine whether these effects generalize across genders.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9414','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9414\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Little attention has been paid to whether an instructor's hand-pointing gestures or use of a mouse-guided arrow can mitigate the attentional loss caused by an instructor's happy facial expressions or can enhance the social benefits of these expressions in instructional videos. The goal of the present study is to determine whether social cues in an instructional video affect learning processes and outcomes. The participants were 57 female students from a university. We employed a 2 \u00d7 2 mixed experimental design. The instructor's facial expression was a within-subject variable, while the type of pointing cue was a between-subject variable. Students who had the smiling instructor rather than the bored instructor gave higher ratings of the perceived positive emotion of the instructor, felt more positive emotion, and had more motivation to learn. Eye-tracking technology showed that students who learned with the smiling instructor spent more time looking at the content on the slides than those who learned with a bored instructor. Students who learned with the smiling instructor scored higher on a learning outcome post-test than those who learned with the bored instructor. Among female Chinese students, this pattern is consistent with the five steps posited by the positivity principle, which concludes that people learn better from instructors who exhibit positive social cues. 
Pointing with a human hand was not superior to pointing with an arrow, suggesting that in this case hand-pointing was not a strong social cue and did not moderate the effects of facial expression. Given the exclusively female sample, future research should examine whether these effects generalize across genders.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9414','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9414\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/educsci16010082\" title=\"Follow DOI:10.3390\/educsci16010082\" target=\"_blank\">doi:10.3390\/educsci16010082<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9414','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Effie J. Pereira; Jelena Ristic<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9301','tp_abstract')\" style=\"cursor:pointer;\">Beauty in the eye of the beholder: Attention to attractive faces dissociates across covert and overt measures<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Attention, Perception, &amp; Psychophysics, <\/span><span class=\"tp_pub_additional_volume\">vol. 88, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201317, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9301\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9301','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9301\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9301','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9301\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9301','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9301\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Pereira2026,<br \/>\r\ntitle = {Beauty in the eye of the beholder: Attention to attractive faces dissociates across covert and overt measures},<br \/>\r\nauthor = {Effie J. Pereira and Jelena Ristic},<br \/>\r\ndoi = {10.3758\/s13414-025-03162-y},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Attention, Perception, & Psychophysics},<br \/>\r\nvolume = {88},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Springer},<br \/>\r\nabstract = {Attractive faces attract attention. Here, we examined how facial attractiveness influenced covert and overt social attention. Participants discriminated targets occurring after one of 32 different face\u2013object cue pairs, which varied in the degree of attractiveness. Experiment 1 measured covert social attention in manual responses while maintaining central fixation. No evidence of attentional preference for faces was found. Experiment 2 measured overt social attention in eye movements while maintaining natural viewing conditions. A reliable oculomotor preference for attractive faces was found. 
Thus, facial attractiveness affects covert and overt social attention differently, reflecting the diverging ways in which faces affect attention with respect to social functioning in daily life. The datasets for all experiments can be found on the Open Science Framework (https:\/\/osf.io\/u54tp\/).},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9301','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9301\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Attractive faces attract attention. Here, we examined how facial attractiveness influenced covert and overt social attention. Participants discriminated targets occurring after one of 32 different face\u2013object cue pairs, which varied in the degree of attractiveness. Experiment 1 measured covert social attention in manual responses while maintaining central fixation. No evidence of attentional preference for faces was found. Experiment 2 measured overt social attention in eye movements while maintaining natural viewing conditions. A reliable oculomotor preference for attractive faces was found. Thus, facial attractiveness affects covert and overt social attention differently, reflecting the diverging ways in which faces affect attention with respect to social functioning in daily life. 
The datasets for all experiments can be found on the Open Science Framework (https:\/\/osf.io\/u54tp\/).<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9301','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9301\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13414-025-03162-y\" title=\"Follow DOI:10.3758\/s13414-025-03162-y\" target=\"_blank\">doi:10.3758\/s13414-025-03162-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9301','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Mario Michiels; David Luque; Ignacio Obeso<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('8064','tp_abstract')\" style=\"cursor:pointer;\">Implicit and explicit reversal of trained oculomotor movements<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurobiology of Learning and Memory, <\/span><span class=\"tp_pub_additional_volume\">vol. 223, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20137, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_8064\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('8064','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_8064\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('8064','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_8064\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('8064','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_8064\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Michiels2026,<br \/>\r\ntitle = {Implicit and explicit reversal of trained oculomotor movements},<br \/>\r\nauthor = {Mario Michiels and David Luque and Ignacio Obeso},<br \/>\r\ndoi = {10.1016\/j.nlm.2025.108114},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Neurobiology of Learning and Memory},<br \/>\r\nvolume = {223},<br \/>\r\npages = {1\u20137},<br \/>\r\npublisher = {Academic Press Inc.},<br \/>\r\nabstract = {Habitual behavior is thought to emerge with extended training and reduced sensitivity to outcome devaluation. However, little is known about how habit-like oculomotor responses adapt when devaluation is implicit or embedded within a previously learned context. We examined this in a novel oculomotor learning task involving visual shape-reward associations with both standard and overtrained stimuli. Twenty-six participants completed a shape-color learning task while their eye movements were recorded using an eye-tracker system (1000 Hz). 
The task involved 11 blocks, including training, intra-block reversal (implicit stimulus-reward changes), and classical devaluation phases (explicitly instructed reward changes). Statistical analyses were performed using linear mixed-effects models on accuracy and response time (RT) measures. As expected, higher accuracy and faster responses for overtrained versus standard-trained stimuli were observed during training, confirming stronger learning. In the classical devaluation phase, overtrained stimuli elicited significantly more errors compared to standard-trained stimuli, relative to the performance in the training phase. This indicates stronger resistance to goal-directed updating. The effect was more pronounced during intra-block reversal of associations, where reward contingencies changed without warning. While RTs were not affected by classical devaluation, intra-block reversal significantly increased RTs for overtrained stimuli, relative to RTs in the training phase. This suggests a higher cognitive cost for overriding well-learned habitual responses when changes are unpredictable. These findings provide new evidence for the behavioral rigidity associated with overtraining of oculomotor behavior and suggest that unexpected outcome changes impose an additional switch cost on habitual oculomotor behavior.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8064','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_8064\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Habitual behavior is thought to emerge with extended training and reduced sensitivity to outcome devaluation. However, little is known about how habit-like oculomotor responses adapt when devaluation is implicit or embedded within a previously learned context. 
We examined this in a novel oculomotor learning task involving visual shape-reward associations with both standard and overtrained stimuli. Twenty-six participants completed a shape-color learning task while their eye movements were recorded using an eye-tracker system (1000 Hz). The task involved 11 blocks, including training, intra-block reversal (implicit stimulus-reward changes), and classical devaluation phases (explicitly instructed reward changes). Statistical analyses were performed using linear mixed-effects models on accuracy and response time (RT) measures. As expected, higher accuracy and faster responses for overtrained versus standard-trained stimuli were observed during training, confirming stronger learning. In the classical devaluation phase, overtrained stimuli elicited significantly more errors compared to standard-trained stimuli, relative to the performance in the training phase. This indicates stronger resistance to goal-directed updating. The effect was more pronounced during intra-block reversal of associations, where reward contingencies changed without warning. While RTs were not affected by classical devaluation, intra-block reversal significantly increased RTs for overtrained stimuli, relative to RTs in the training phase. This suggests a higher cognitive cost for overriding well-learned habitual responses when changes are unpredictable. 
These findings provide new evidence for the behavioral rigidity associated with overtraining of oculomotor behavior and suggest that unexpected outcome changes impose an additional switch cost on habitual oculomotor behavior.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8064','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_8064\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.nlm.2025.108114\" title=\"Follow DOI:10.1016\/j.nlm.2025.108114\" target=\"_blank\">doi:10.1016\/j.nlm.2025.108114<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8064','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Sara LoTemplio; Jack Silcox; David L. Strayer; Brennan R. Payne<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('7248','tp_abstract')\" style=\"cursor:pointer;\">Single\u2010trial relationships between the error\u2010related negativity, Pe, error\u2010related pupillary dilation response, and post\u2010error behavior<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychophysiology, <\/span><span class=\"tp_pub_additional_volume\">vol. 63, <\/span><span class=\"tp_pub_additional_number\">no.
1, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_7248\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7248','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_7248\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7248','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_7248\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7248','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_7248\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{LoTemplio2026,<br \/>\r\ntitle = {Single\u2010trial relationships between the error\u2010related negativity, Pe, error\u2010related pupillary dilation response, and post\u2010error behavior},<br \/>\r\nauthor = {Sara LoTemplio and Jack Silcox and David L. Strayer and Brennan R. Payne},<br \/>\r\ndoi = {10.1111\/psyp.70216},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Psychophysiology},<br \/>\r\nvolume = {63},<br \/>\r\nnumber = {1},<br \/>\r\nabstract = {The amplitude of the error\u2010related negativity (ERN) is known to be correlated with attention to task and general cognitive control abilities. However, previous research has struggled to consistently link ERN amplitude with behavioral accuracy or reaction time in the task from which the ERN is being measured. This lack of relationship could be due to many factors that are difficult to control for, so explorations of other converging measures to understand error\u2010processing and subsequent behavior adjustment are warranted.
The current study examines how two other physiological markers of error\u2010processing\u2014the phasic pupillary dilation response (PDR) and the positivity following an error (Pe)\u2014relate to post\u2010error behavior. Additionally, we examine relationships between the three physiological indices of error\u2010processing. In the study, EEG and pupillometry were simultaneously recorded while participants completed 24 blocks (50 trials each) of an Eriksen flanker task. For post\u2010error accuracy, we found that on a single\u2010trial level, the amplitude of all three physiological error\u2010processing indices for error trials predicted post\u2010error accuracy. At the subject level, only the PDR predicted average post\u2010error accuracy. For post\u2010error slowing, at the single\u2010trial level, only the Pe predicted post\u2010error slowing, whereas only the ERN predicted post\u2010error slowing at the subject level. We also found that both the ERN and Pe correlated with PDR amplitude. This is consistent with our hypothesis that the Pe and PDR may share underlying neural mechanisms, but qualified by the fact that the ERN, which is not hypothesized to have shared neural mechanisms, also predicted unique variance in pupillary amplitude. Collectively, these results suggest that the PDR and Pe might represent promising indicators of post\u2010error behavior adjustment and highlight the need to examine relationships at multiple levels of analysis.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7248','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_7248\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The amplitude of the error\u2010related negativity (ERN) is known to be correlated with attention to task and general cognitive control abilities.
However, previous research has struggled to consistently link ERN amplitude with behavioral accuracy or reaction time in the task from which the ERN is being measured. This lack of relationship could be due to many factors that are difficult to control for, so explorations of other converging measures to understand error\u2010processing and subsequent behavior adjustment are warranted. The current study examines how two other physiological markers of error\u2010processing\u2014the phasic pupillary dilation response (PDR) and the positivity following an error (Pe)\u2014relate to post\u2010error behavior. Additionally, we examine relationships between the three physiological indices of error\u2010processing. In the study, EEG and pupillometry were simultaneously recorded while participants completed 24 blocks (50 trials each) of an Eriksen flanker task. For post\u2010error accuracy, we found that on a single\u2010trial level, the amplitude of all three physiological error\u2010processing indices for error trials predicted post\u2010error accuracy. At the subject level, only the PDR predicted average post\u2010error accuracy. For post\u2010error slowing, at the single\u2010trial level, only the Pe predicted post\u2010error slowing, whereas only the ERN predicted post\u2010error slowing at the subject level. We also found that both the ERN and Pe correlated with PDR amplitude. This is consistent with our hypothesis that the Pe and PDR may share underlying neural mechanisms, but qualified by the fact that the ERN, which is not hypothesized to have shared neural mechanisms, also predicted unique variance in pupillary amplitude.
Collectively, these results suggest that the PDR and Pe might represent promising indicators of post\u2010error behavior adjustment and highlight the need to examine relationships at multiple levels of analysis.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7248','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_7248\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1111\/psyp.70216\" title=\"Follow DOI:10.1111\/psyp.70216\" target=\"_blank\">doi:10.1111\/psyp.70216<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7248','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Raymond M. Klein; \u015eimal D\u00f6lek; John Christie<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('6150','tp_abstract')\" style=\"cursor:pointer;\">Does the output form of inhibition of return operate at or after the bottleneck?<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Acta Psychologica, <\/span><span class=\"tp_pub_additional_volume\">vol. 262, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20138, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_6150\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6150','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_6150\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6150','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_6150\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6150','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_6150\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Klein2026,<br \/>\r\ntitle = {Does the output form of inhibition of return operate at or after the bottleneck?},<br \/>\r\nauthor = {Raymond M. Klein and \u015eimal D\u00f6lek and John Christie},<br \/>\r\ndoi = {10.1016\/j.actpsy.2025.105944},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Acta Psychologica},<br \/>\r\nvolume = {262},<br \/>\r\npages = {1\u20138},<br \/>\r\nabstract = {Inhibition of return (IOR) refers to the longer reaction times (RTs) to targets presented at previously cued, fixated or attended locations. It has been suggested that there are two distinct forms of IOR. The input form, generated when the reflexive oculomotor system is suppressed, affects the sensory\/perceptual processing. The output form, generated when the reflexive oculomotor system is not suppressed, biases responding. It has been demonstrated, using the locus of slack logic associated with the psychological refractory period (Pashler, 1998), that the input form of IOR operates on a pre-bottleneck stage of processing (Kavyani et al., 2017). Using the same logic, Klein et al.
(2020) demonstrated that the output form of IOR operates at or after the bottleneck. Building on the methods of Klein et al., the present study used the PRP paradigm to determine whether the output form of IOR operates at or after the bottleneck. The output form of IOR was generated by an initial saccade from a peripheral location to a central fixation point. Task 1 consisted of a manual response indicating the location (right\/left) of a subsequent visual stimulus. Task 2 required participants to discriminate the frequency (high\/low) of an auditory stimulus and make a key-press response with their other hand. The targets (T1 and T2) for the two tasks were presented in close succession with 200, 400 and 800 ms target-target onset asynchronies (TTOAs). Responses to T1 were delayed by IOR and responses to T2 were substantially delayed when the TTOA was brief. Statistical analysis of the amount of carry-over of the IOR effect experienced by Task 1 onto the RTs for Task 2 strongly suggests that the output form of IOR operates after the bottleneck. Nevertheless, aspects of the results could be interpreted to support a weaker influence of IOR operating also at the bottleneck stage of processing.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6150','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_6150\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Inhibition of return (IOR) refers to the longer reaction times (RTs) to targets presented at previously cued, fixated or attended locations. It has been suggested that there are two distinct forms of IOR. The input form, generated when the reflexive oculomotor system is suppressed, affects the sensory\/perceptual processing. The output form, generated when the reflexive oculomotor system is not suppressed, biases responding.
It has been demonstrated, using the locus of slack logic associated with the psychological refractory period (Pashler, 1998), that the input form of IOR operates on a pre-bottleneck stage of processing (Kavyani et al., 2017). Using the same logic, Klein et al. (2020) demonstrated that the output form of IOR operates at or after the bottleneck. Building on the methods of Klein et al., the present study used the PRP paradigm to determine whether the output form of IOR operates at or after the bottleneck. The output form of IOR was generated by an initial saccade from a peripheral location to a central fixation point. Task 1 consisted of a manual response indicating the location (right\/left) of a subsequent visual stimulus. Task 2 required participants to discriminate the frequency (high\/low) of an auditory stimulus and make a key-press response with their other hand. The targets (T1 and T2) for the two tasks were presented in close succession with 200, 400 and 800 ms target-target onset asynchronies (TTOAs). Responses to T1 were delayed by IOR and responses to T2 were substantially delayed when the TTOA was brief. Statistical analysis of the amount of carry-over of the IOR effect experienced by Task 1 onto the RTs for Task 2 strongly suggests that the output form of IOR operates after the bottleneck.
Nevertheless, aspects of the results could be interpreted to support a weaker influence of IOR operating also at the bottleneck stage of processing.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6150','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_6150\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.actpsy.2025.105944\" title=\"Follow DOI:10.1016\/j.actpsy.2025.105944\" target=\"_blank\">doi:10.1016\/j.actpsy.2025.105944<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6150','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Hyunwoo Kim; Kitaek Kim; Haerim Hwang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('6068','tp_abstract')\" style=\"cursor:pointer;\">Effects of goals and strategies on predictive processing: A visual world eye-tracking study on honorific agreement in Korean<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Linguistics, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201335, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_6068\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6068','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_6068\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6068','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_6068\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6068','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_6068\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Kim2026,<br \/>\r\ntitle = {Effects of goals and strategies on predictive processing: A visual world eye-tracking study on honorific agreement in Korean},<br \/>\r\nauthor = {Hyunwoo Kim and Kitaek Kim and Haerim Hwang},<br \/>\r\ndoi = {10.1515\/ling-2024-0203},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Linguistics},<br \/>\r\npages = {1\u201335},<br \/>\r\nabstract = {There is ongoing debate about whether prediction is driven solely by bottom-up associative links or is modulated by top-down goals and strategies. The current study attempts to address this issue by investigating the role of top-down factors in Korean speakers' predictive processing of honorific agreement. Two visual-world eye-tracking experiments were conducted, analyzing participants' anticipatory eye movements while manipulating two top-down factors. In Experiment 1, we assigned participants to two groups with different instructions, asking one group to listen to sentences and answer referent-selection questions, and the other group to actively predict the upcoming referent. 
Experiment 2 manipulated the validity of predictive cues by interspersing experimental items with fillers containing consistent or inconsistent continuations. Results from Experiment 1 showed that participants instructed to actively anticipate the referent used honorific information more quickly to make predictions than the comprehension-only group. In Experiment 2, the group exposed to predictive linguistic stimuli showed an earlier and stronger prediction effect compared to the group exposed to stimuli with no prediction validity. These results suggest that comprehenders engage in different degrees of prediction according to the current demands of task goals and strategies. We discuss these findings in light of recent theories of predictive language processing.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6068','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_6068\" style=\"display:none;\"><div class=\"tp_abstract_entry\">There is ongoing debate about whether prediction is driven solely by bottom-up associative links or is modulated by top-down goals and strategies. The current study attempts to address this issue by investigating the role of top-down factors in Korean speakers' predictive processing of honorific agreement. Two visual-world eye-tracking experiments were conducted, analyzing participants' anticipatory eye movements while manipulating two top-down factors. In Experiment 1, we assigned participants to two groups with different instructions, asking one group to listen to sentences and answer referent-selection questions, and the other group to actively predict the upcoming referent. Experiment 2 manipulated the validity of predictive cues by interspersing experimental items with fillers containing consistent or inconsistent continuations. 
Results from Experiment 1 showed that participants instructed to actively anticipate the referent used honorific information more quickly to make predictions than the comprehension-only group. In Experiment 2, the group exposed to predictive linguistic stimuli showed an earlier and stronger prediction effect compared to the group exposed to stimuli with no prediction validity. These results suggest that comprehenders engage in different degrees of prediction according to the current demands of task goals and strategies. We discuss these findings in light of recent theories of predictive language processing.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6068','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_6068\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1515\/ling-2024-0203\" title=\"Follow DOI:10.1515\/ling-2024-0203\" target=\"_blank\">doi:10.1515\/ling-2024-0203<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6068','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Xin Huang; Bikalpa Ghimire; Anjani Sreeprada Chakrala; Steven Wiesner<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('5147','tp_abstract')\" style=\"cursor:pointer;\">Neural coding of multiple motion speeds in visual cortical area MT<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">eLife, <\/span><span class=\"tp_pub_additional_volume\">vol. 13, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201343, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_5147\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5147','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_5147\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5147','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_5147\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5147','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_5147\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Huang2026,<br \/>\r\ntitle = {Neural coding of multiple motion speeds in visual cortical area MT},<br \/>\r\nauthor = {Xin Huang and Bikalpa Ghimire and Anjani Sreeprada Chakrala and Steven Wiesner},<br \/>\r\ndoi = {10.7554\/eLife.94835},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {eLife},<br \/>\r\nvolume = {13},<br \/>\r\npages = {1\u201343},<br \/>\r\nabstract = {Motion speed is a salient cue for visual segmentation, yet how the visual system represents and differentiates multiple speeds remains unclear. Here, we investigated the encoding and decoding of multiple speeds. We first characterized the perceptual capacity of human and macaque subjects to segment overlapping stimuli moving at different speeds. We then determined how neurons in area MT of macaque monkeys represent multiple speeds. We found that the responses of MT neurons to two speeds showed a robust bias toward the faster speed component. This faster-speed bias occurred when both speeds were slow (\u226420\u00b0\/s) and diminished as stimulus speed increased. 
Our findings can be explained by a modified divisive normalization model, in which the weights for the speed components are proportional to the responses of a population of neurons (the weighting pool) with a broad range of speed preferences, elicited by the individual speeds. Regarding decoding, a classifier could distinguish MT responses to two speeds from those to a corresponding log-mean speed. We further found that it was possible to decode two speeds from the MT population response, supporting the theoretical framework of coding multiplicity in neuronal populations. The decoded speeds can account for perceptual performance in segmenting two speeds with a large (4x) but not a small (2x) separation. Our findings help define the neural coding rule of multiple speeds. The faster-speed bias in MT could benefit important behavioral tasks, such as figure-ground segregation, as figural objects tend to move faster than the background in the natural environment.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5147','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_5147\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Motion speed is a salient cue for visual segmentation, yet how the visual system represents and differentiates multiple speeds remains unclear. Here, we investigated the encoding and decoding of multiple speeds. We first characterized the perceptual capacity of human and macaque subjects to segment overlapping stimuli moving at different speeds. We then determined how neurons in area MT of macaque monkeys represent multiple speeds. We found that the responses of MT neurons to two speeds showed a robust bias toward the faster speed component. 
This faster-speed bias occurred when both speeds were slow (\u226420\u00b0\/s) and diminished as stimulus speed increased. Our findings can be explained by a modified divisive normalization model, in which the weights for the speed components are proportional to the responses of a population of neurons (the weighting pool) with a broad range of speed preferences, elicited by the individual speeds. Regarding decoding, a classifier could distinguish MT responses to two speeds from those to a corresponding log-mean speed. We further found that it was possible to decode two speeds from the MT population response, supporting the theoretical framework of coding multiplicity in neuronal populations. The decoded speeds can account for perceptual performance in segmenting two speeds with a large (4x) but not a small (2x) separation. Our findings help define the neural coding rule of multiple speeds. The faster-speed bias in MT could benefit important behavioral tasks, such as figure-ground segregation, as figural objects tend to move faster than the background in the natural environment.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5147','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_5147\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.7554\/eLife.94835\" title=\"Follow DOI:10.7554\/eLife.94835\" target=\"_blank\">doi:10.7554\/eLife.94835<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5147','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zachary Hamblin-Frohman; Jay Pratt<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('4455','tp_abstract')\" style=\"cursor:pointer;\">Rapid 
development of inhibitory effects in response to novel features: It's mostly target-feature enhancement<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychonomic Bulletin &amp; Review, <\/span><span class=\"tp_pub_additional_volume\">vol. 33, <\/span><span class=\"tp_pub_additional_number\">no. 7, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201310, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4455\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4455','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4455\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4455','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4455\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4455','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4455\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{HamblinFrohman2026,<br \/>\r\ntitle = {Rapid development of inhibitory effects in response to novel features: It's mostly target-feature enhancement},<br \/>\r\nauthor = {Zachary Hamblin-Frohman and Jay Pratt},<br \/>\r\ndoi = {10.3758\/s13423-025-02815-1},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Psychonomic Bulletin & Review},<br \/>\r\nvolume = {33},<br \/>\r\nnumber = {7},<br \/>\r\npages = {1\u201310},<br \/>\r\nabstract = {In some visual search scenarios, the presence of a singleton distractor leads to faster search performance. 
This has been coined as the inhibition effect and is believed to represent avoidance of the singleton distractor. Research has identified two contributing components: a bias towards target features, target-feature enhancement, a bias away from distractor features, distractor-feature suppression. The current study examines how each of these effects independently develops in response to novel stimuli. In short blocks participants completed a search for a pre-defined target shape. Each block the colour of the target and the distractor were randomized so that the initial and subsequent attentional adaptations to these features could be assessed (via eye-tracking). These mini-blocks reveal substantial information about the development of the inhibition effect. Incredibly, we observe the classic inhibition effect (shorter RTs on distractor-present trials) as soon as the second trial of each block. Furthermore, the effect emerged even if it was the first presentation of the distractor feature. Gaze analysis concurs with this, eyes avoided the distractor when the target feature was known, but the distractor feature unknown. This shows compelling evidence for guidance from target-feature enhancement. However, some evidence for distractor-feature suppression is observed, further oculomotor suppression of the distractor is seen after its initial presentation. 
Together, the current results show that the inhibition effect develops rapidly in visual search displays, and that while a large portion of the effect can be accounted for by target-enhancement, distractor-suppression may still have a role in influencing attentional allocations.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4455','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4455\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In some visual search scenarios, the presence of a singleton distractor leads to faster search performance. This has been coined as the inhibition effect and is believed to represent avoidance of the singleton distractor. Research has identified two contributing components: a bias towards target features, target-feature enhancement, a bias away from distractor features, distractor-feature suppression. The current study examines how each of these effects independently develops in response to novel stimuli. In short blocks participants completed a search for a pre-defined target shape. Each block the colour of the target and the distractor were randomized so that the initial and subsequent attentional adaptations to these features could be assessed (via eye-tracking). These mini-blocks reveal substantial information about the development of the inhibition effect. Incredibly, we observe the classic inhibition effect (shorter RTs on distractor-present trials) as soon as the second trial of each block. Furthermore, the effect emerged even if it was the first presentation of the distractor feature. Gaze analysis concurs with this, eyes avoided the distractor when the target feature was known, but the distractor feature unknown. This shows compelling evidence for guidance from target-feature enhancement. 
However, some evidence for distractor-feature suppression is observed, further oculomotor suppression of the distractor is seen after its initial presentation. Together, the current results show that the inhibition effect develops rapidly in visual search displays, and that while a large portion of the effect can be accounted for by target-enhancement, distractor-suppression may still have a role in influencing attentional allocations.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4455','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4455\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3758\/s13423-025-02815-1\" title=\"Follow DOI:10.3758\/s13423-025-02815-1\" target=\"_blank\">doi:10.3758\/s13423-025-02815-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4455','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Carie Guan; Naomi Geller; Maya Mammon; Naiqi G. Xiao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('4299','tp_abstract')\" style=\"cursor:pointer;\">Infants recognized other-race faces when learning them with incidental emotional sounds<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Developmental Psychobiology, <\/span><span class=\"tp_pub_additional_volume\">vol. 68, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201313, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4299\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4299','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4299\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4299','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4299\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4299','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4299\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Guan2026,<br \/>\r\ntitle = {Infants recognized other-race faces when learning them with incidental emotional sounds},<br \/>\r\nauthor = {Carie Guan and Naomi Geller and Maya Mammon and Naiqi G. Xiao},<br \/>\r\ndoi = {10.1002\/dev.70110},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Developmental Psychobiology},<br \/>\r\nvolume = {68},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201313},<br \/>\r\npublisher = {John Wiley and Sons Inc},<br \/>\r\nabstract = {Infant face recognition shows plasticity, with recent evidence indicating enhancement by the presence of emotional facial expressions. The mechanisms and domain-generality of this effect remain largely unknown. This study tested whether auditory emotional cues (vocalizations) facilitated infants' recognition of other-race faces, a perceptual challenge during the first year of life. Infants (N = 89) were presented with emotionally neutral faces paired with happy, sad, or neutral vocal sounds in a within-subjects design. 
Experiment 1 assessed recognition using identical face images between the familiarization and test phases, while Experiment 2 examined face recognition across viewpoint changes. Across both experiments, infants exhibited successful face recognition only when they were learned with emotional sounds (happy and sad). This facilitative effect remained stable across the tested age range and did not differ between happy and sad vocalizations. Infants' eye movement data revealed comparable face-looking patterns across conditions, suggesting that the facilitation was not driven by changes in visual attention. Thus, incidental, cross-modal emotional signals significantly enhance infant face recognition. This underscores the early integrative nature of emotion processing and its catalytic role in cognitive development.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4299','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4299\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Infant face recognition shows plasticity, with recent evidence indicating enhancement by the presence of emotional facial expressions. The mechanisms and domain-generality of this effect remain largely unknown. This study tested whether auditory emotional cues (vocalizations) facilitated infants' recognition of other-race faces, a perceptual challenge during the first year of life. Infants (N = 89) were presented with emotionally neutral faces paired with happy, sad, or neutral vocal sounds in a within-subjects design. Experiment 1 assessed recognition using identical face images between the familiarization and test phases, while Experiment 2 examined face recognition across viewpoint changes. 
Across both experiments, infants exhibited successful face recognition only when they were learned with emotional sounds (happy and sad). This facilitative effect remained stable across the tested age range and did not differ between happy and sad vocalizations. Infants' eye movement data revealed comparable face-looking patterns across conditions, suggesting that the facilitation was not driven by changes in visual attention. Thus, incidental, cross-modal emotional signals significantly enhance infant face recognition. This underscores the early integrative nature of emotion processing and its catalytic role in cognitive development.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4299','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4299\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1002\/dev.70110\" title=\"Follow DOI:10.1002\/dev.70110\" target=\"_blank\">doi:10.1002\/dev.70110<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4299','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Matthias Grabenhorst; David Poeppel; Georgios Michalareas<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('4159','tp_abstract')\" style=\"cursor:pointer;\">The anticipation of imminent events is time-scale invariant<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">PNAS, <\/span><span class=\"tp_pub_additional_volume\">vol. 123, <\/span><span class=\"tp_pub_additional_number\">no. 2, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201311, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4159\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4159','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4159\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4159','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4159\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4159','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4159\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Grabenhorst2026,<br \/>\r\ntitle = {The anticipation of imminent events is time-scale invariant},<br \/>\r\nauthor = {Matthias Grabenhorst and David Poeppel and Georgios Michalareas},<br \/>\r\ndoi = {10.1073\/pnas.2518982123},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {PNAS},<br \/>\r\nvolume = {123},<br \/>\r\nnumber = {2},<br \/>\r\npages = {1\u201311},<br \/>\r\nabstract = {Humans predict the timing of imminent events to generate fast and precise actions, decisions, and other behaviors. Such temporal anticipation is critical over wide timescales, and especially salient over the range from hundreds of milliseconds to a few seconds. Despite advances in our understanding of basic timing behavior and its underlying neural mechanisms, it remains an open question whether anticipation is stable across these short time scales. Recent work shows that the brain models the probability density function (PDF) of events across time, suggesting a canonical mechanism for temporal anticipation. Here, we investigate whether this computation holds when the event distribution covers different time spans. 
We show that, irrespective of the time span, anticipation, measured as reaction time, scales with the event distribution. This demonstrates that the key computation\u2014the estimation of event probability density\u2014is invariant across temporal scales. We further show that the precision of anticipation is also scale invariant which contradicts Weber's law. The results are established in vision and audition, suggesting that the core computations in temporal anticipation are independent of sensory modality. Perceptual systems exploit probability estimation over time independently of temporal scale to anticipate imminent events.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4159','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4159\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Humans predict the timing of imminent events to generate fast and precise actions, decisions, and other behaviors. Such temporal anticipation is critical over wide timescales, and especially salient over the range from hundreds of milliseconds to a few seconds. Despite advances in our understanding of basic timing behavior and its underlying neural mechanisms, it remains an open question whether anticipation is stable across these short time scales. Recent work shows that the brain models the probability density function (PDF) of events across time, suggesting a canonical mechanism for temporal anticipation. Here, we investigate whether this computation holds when the event distribution covers different time spans. We show that, irrespective of the time span, anticipation, measured as reaction time, scales with the event distribution. This demonstrates that the key computation\u2014the estimation of event probability density\u2014is invariant across temporal scales. 
We further show that the precision of anticipation is also scale invariant which contradicts Weber's law. The results are established in vision and audition, suggesting that the core computations in temporal anticipation are independent of sensory modality. Perceptual systems exploit probability estimation over time independently of temporal scale to anticipate imminent events.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4159','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4159\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1073\/pnas.2518982123\" title=\"Follow DOI:10.1073\/pnas.2518982123\" target=\"_blank\">doi:10.1073\/pnas.2518982123<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4159','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhushi Fu; Xiaotong Ding; Yutao Lu; Cai Xing<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('3753','tp_abstract')\" style=\"cursor:pointer;\">Physiological evidence supporting the emotional motivation account of the ending effect: Pupil diameters increase toward the end<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Gambling Studies, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201315, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_3753\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3753','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_3753\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3753','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_3753\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3753','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_3753\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Fu2026,<br \/>\r\ntitle = {Physiological evidence supporting the emotional motivation account of the ending effect: Pupil diameters increase toward the end},<br \/>\r\nauthor = {Zhushi Fu and Xiaotong Ding and Yutao Lu and Cai Xing},<br \/>\r\ndoi = {10.1007\/s10899-025-10473-0},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Journal of Gambling Studies},<br \/>\r\npages = {1\u201315},<br \/>\r\nabstract = {The phenomenon of increased risk-taking in the last round of a set of risky decision-making tasks is called the ending effect. Recent empirical studies proposed an emotional motivation account to explain the ending effect. That is, the pursuit of an emotionally satisfying ending leads to the increase of risk-taking. However, previous studies have mostly examined the ending effect at the behavioral level, there is yet no physiological evidence to examine the emotional motivation account. To fill in this gap of knowledge, the current study examined the emotional motivation account at the physiological level by recording pupil diameters, which reflect the activation of emotional motivation. 
Participants were randomly assigned to complete eight rounds or ten rounds of risk decision tasks while having their eyes tracked. The results showed a significant interaction between round and group on pupil diameter. Specifically, there was no significant difference between the first six rounds and the 8th round in the experimental group. For the control group, the pupil diameter of the first six rounds was significantly larger than the 8th round. Perceived ending may have sustained emotional arousal. This finding provides qualified physiological support for the emotional motivation account of the ending effect.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3753','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_3753\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The phenomenon of increased risk-taking in the last round of a set of risky decision-making tasks is called the ending effect. Recent empirical studies proposed an emotional motivation account to explain the ending effect. That is, the pursuit of an emotionally satisfying ending leads to the increase of risk-taking. However, previous studies have mostly examined the ending effect at the behavioral level, there is yet no physiological evidence to examine the emotional motivation account. To fill in this gap of knowledge, the current study examined the emotional motivation account at the physiological level by recording pupil diameters, which reflect the activation of emotional motivation. Participants were randomly assigned to complete eight rounds or ten rounds of risk decision tasks while having their eyes tracked. The results showed a significant interaction between round and group on pupil diameter. 
Specifically, there was no significant difference between the first six rounds and the 8th round in the experimental group. For the control group, the pupil diameter of the first six rounds was significantly larger than the 8th round. Perceived ending may have sustained emotional arousal. This finding provides qualified physiological support for the emotional motivation account of the ending effect.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3753','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_3753\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s10899-025-10473-0\" title=\"Follow DOI:10.1007\/s10899-025-10473-0\" target=\"_blank\">doi:10.1007\/s10899-025-10473-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3753','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Gabrielle F. Freitag; Shannon Shaughnessy; Jennifer M. Meigs; Parmis Khosravi; Julia O. Linke; Spencer C. Evans; Ellen Leibenluft; Melissa A. Brotman; Daniel S. Pine; Katharina Kircanski; Elise M. Cardinale<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('3685','tp_abstract')\" style=\"cursor:pointer;\">An investigation of inhibitory control as a mechanism differentiating tonic and phasic irritability<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Child Psychiatry &amp; Human Development, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201311, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_3685\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3685','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_3685\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3685','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_3685\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3685','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_3685\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Freitag2026,<br \/>\r\ntitle = {An investigation of inhibitory control as a mechanism differentiating tonic and phasic irritability},<br \/>\r\nauthor = {Gabrielle F. Freitag and Shannon Shaughnessy and Jennifer M. Meigs and Parmis Khosravi and Julia O. Linke and Spencer C. Evans and Ellen Leibenluft and Melissa A. Brotman and Daniel S. Pine and Katharina Kircanski and Elise M. Cardinale},<br \/>\r\ndoi = {10.1007\/s10578-025-01957-6},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Child Psychiatry & Human Development},<br \/>\r\npages = {1\u201311},<br \/>\r\nabstract = {Phasic and tonic irritability are highly correlated clinical constructs yet differentially associated with developmental trajectories and treatment response. However, limited research has identified their shared and unique underlying behavioral mechanisms. In a sample of youths enriched for irritability (N = 141, age range 7\u201318, age M[SD] = 12.60[2.54], 48.23% female), we investigated whether inhibitory control is differentially associated with phasic versus tonic irritability. 
Replicating prior work, tonic and phasic irritability were estimated via independent confirmatory factor analyses (CFAs) using items and\/or subscales from multi-informant questionnaires. A latent factor of inhibitory control was extracted from four behavioral tasks. Initial multiple linear regression analysis found that phasic, not tonic, irritability was significantly associated with impaired inhibitory control. However, results were no longer significant after accounting for shared associations with age. In addition, when adding commonly co-occurring symptoms such as attention-deficit\/hyperactivity disorder (ADHD) symptoms and oppositionality, age and ADHD were significant predictors of inhibitory control, but phasic irritability was not. Results suggest that inhibitory control alone may not be a salient mechanism for disambiguating phasic and tonic irritability. Future work leveraging longitudinal methods and consideration of other potential contextual factors is needed.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3685','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_3685\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Phasic and tonic irritability are highly correlated clinical constructs yet differentially associated with developmental trajectories and treatment response. However, limited research has identified their shared and unique underlying behavioral mechanisms. In a sample of youths enriched for irritability (N = 141, age range 7\u201318, age M[SD] = 12.60[2.54], 48.23% female), we investigated whether inhibitory control is differentially associated with phasic versus tonic irritability. 
Replicating prior work, tonic and phasic irritability were estimated via independent confirmatory factor analyses (CFAs) using items and\/or subscales from multi-informant questionnaires. A latent factor of inhibitory control was extracted from four behavioral tasks. Initial multiple linear regression analysis found that phasic, not tonic, irritability was significantly associated with impaired inhibitory control. However, results were no longer significant after accounting for shared associations with age. In addition, when adding commonly co-occurring symptoms such as attention-deficit\/hyperactivity disorder (ADHD) symptoms and oppositionality, age and ADHD were significant predictors of inhibitory control, but phasic irritability was not. Results suggest that inhibitory control alone may not be a salient mechanism for disambiguating phasic and tonic irritability. Future work leveraging longitudinal methods and consideration of other potential contextual factors is needed.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3685','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_3685\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s10578-025-01957-6\" title=\"Follow DOI:10.1007\/s10578-025-01957-6\" target=\"_blank\">doi:10.1007\/s10578-025-01957-6<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3685','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Olympia Colizoli; Tessa M. 
Leeuwen; Danaja Rutar; Harold Bekkering<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('2173','tp_abstract')\" style=\"cursor:pointer;\">Pupil dilation offers a time-window on prediction error<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">eLife, <\/span><span class=\"tp_pub_additional_volume\">vol. 14, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201344, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_2173\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2173','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_2173\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2173','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_2173\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2173','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_2173\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Colizoli2026,<br \/>\r\ntitle = {Pupil dilation offers a time-window on prediction error},<br \/>\r\nauthor = {Olympia Colizoli and Tessa M. Leeuwen and Danaja Rutar and Harold Bekkering},<br \/>\r\ndoi = {10.7554\/eLife.105287},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {eLife},<br \/>\r\nvolume = {14},<br \/>\r\npages = {1\u201344},<br \/>\r\nabstract = {Task-evoked pupil dilation is notably linked to unexpected events. 
Building on Z\u00e9non's (2019) information-theory framework, we investigated whether the pupil's response to feedback on decision outcomes during associative learning reflects a prediction error signal. Operationally, we defined prediction errors as an interaction between stimulus-pair frequency and accuracy. We then tested if these signals correlated with information gain, formally defined as the Kullback-Leibler (KL) divergence between posterior and prior belief distributions of an ideal observer. We reasoned that information gain should be proportional to the precision-weighted prediction error signals potentially arising from neuromodulatory arousal networks. We analyzed two data sets in which participants performed perceptual decision-making tasks while pupil dilation was recorded. Our findings consistently showed that a significant proportion of variability in the post-feedback pupil response was explained by information gain shortly after feedback presentation. For the first time, we present evidence that whether the pupil dilates or constricts along with information gain was context dependent. This study offers empirical evidence that the pupil's response provides valuable insights into the process of model updating during learning, highlighting its utility as a physiological indicator of internal belief states.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2173','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_2173\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Task-evoked pupil dilation is notably linked to unexpected events. Building on Z\u00e9non's (2019) information-theory framework, we investigated whether the pupil's response to feedback on decision outcomes during associative learning reflects a prediction error signal. 
Operationally, we defined prediction errors as an interaction between stimulus-pair frequency and accuracy. We then tested if these signals correlated with information gain, formally defined as the Kullback-Leibler (KL) divergence between posterior and prior belief distributions of an ideal observer. We reasoned that information gain should be proportional to the precision-weighted prediction error signals potentially arising from neuromodulatory arousal networks. We analyzed two data sets in which participants performed perceptual decision-making tasks while pupil dilation was recorded. Our findings consistently showed that a significant proportion of variability in the post-feedback pupil response was explained by information gain shortly after feedback presentation. For the first time, we present evidence that whether the pupil dilates or constricts along with information gain was context dependent. This study offers empirical evidence that the pupil's response provides valuable insights into the process of model updating during learning, highlighting its utility as a physiological indicator of internal belief states.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2173','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_2173\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.7554\/eLife.105287\" title=\"Follow DOI:10.7554\/eLife.105287\" target=\"_blank\">doi:10.7554\/eLife.105287<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2173','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yue Cheng; Weizhen Chen<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" 
onclick=\"teachpress_pub_showhide('1963','tp_abstract')\" style=\"cursor:pointer;\">Visual preferences and place attachment construction of Generation Z tourists at sacred heritage landscapes based on eye-tracking and questionnaire<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Buildings, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201323, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1963\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1963','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1963\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1963','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1963\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1963','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1963\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Cheng2026,<br \/>\r\ntitle = {Visual preferences and place attachment construction of Generation Z tourists at sacred heritage landscapes based on eye-tracking and questionnaire},<br \/>\r\nauthor = {Yue Cheng and Weizhen Chen},<br \/>\r\ndoi = {10.3390\/buildings16010190},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Buildings},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201323},<br \/>\r\nabstract = {Sacred heritage landscapes face significant challenges in engaging 
Generation Z tourists. To understand their visual processing and emotional responses, this study grounded in Cognitive Appraisal Theory (CAT), employed a mixed-methods approach with Chinese youth. Study 1 (N = 35) uses eye-tracking to examine the visual attention of Gen Z to different sacred heritage types, revealing that natural sacred sites yield the highest First Fixation Duration (FFD) and Average Fixation Duration (AFD), alongside stronger subjective preferences\u2014highlighting the role of biophilia and perceptual fluency. Study 2 constructs a moderated mediation model with a questionnaire (N = 300), identifying a \u201cNovelty \u2192 Awe \u2192 Place Attachment\u201d pathway and the moderating role of mindfulness. The research identifies the specific visual processing patterns of Gen Z and provides a psychological model for place attachment, offering empirical insights for designing intergenerationally inclusive heritage landscapes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1963','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1963\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Sacred heritage landscapes face significant challenges in engaging Generation Z tourists. To understand their visual processing and emotional responses, this study grounded in Cognitive Appraisal Theory (CAT), employed a mixed-methods approach with Chinese youth. Study 1 (N = 35) uses eye-tracking to examine the visual attention of Gen Z to different sacred heritage types, revealing that natural sacred sites yield the highest First Fixation Duration (FFD) and Average Fixation Duration (AFD), alongside stronger subjective preferences\u2014highlighting the role of biophilia and perceptual fluency. 
Study 2 constructs a moderated mediation model with a questionnaire (N = 300), identifying a \u201cNovelty \u2192 Awe \u2192 Place Attachment\u201d pathway and the moderating role of mindfulness. The research identifies the specific visual processing patterns of Gen Z and provides a psychological model for place attachment, offering empirical insights for designing intergenerationally inclusive heritage landscapes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1963','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1963\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/buildings16010190\" title=\"Follow DOI:10.3390\/buildings16010190\" target=\"_blank\">doi:10.3390\/buildings16010190<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1963','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Jui-Tai Chen; Yi-Hsuan Chang; Cesar Barquero; Chin-An Wang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('1877','tp_abstract')\" style=\"cursor:pointer;\">Pupil dynamics reveal preparatory processes in the generation of pro-saccades and anti-saccades in open skill sports athletes<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Biology of Sport, <\/span><span class=\"tp_pub_additional_volume\">vol. 43, <\/span><span class=\"tp_pub_additional_pages\">pp. 
77\u201394, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1877\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1877','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1877\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1877','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1877\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1877','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1877\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Chen2026,<br \/>\r\ntitle = {Pupil dynamics reveal preparatory processes in the generation of pro-saccades and anti-saccades in open skill sports athletes},<br \/>\r\nauthor = {Jui-Tai Chen and Yi-Hsuan Chang and Cesar Barquero and Chin-An Wang},<br \/>\r\ndoi = {10.5114\/biolsport.2026.153308},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Biology of Sport},<br \/>\r\nvolume = {43},<br \/>\r\npages = {77\u201394},<br \/>\r\npublisher = {Termedia Sp. z.o.o.},<br \/>\r\nabstract = {This study investigated pupil dynamics to establish a physiological index of mental processes associated with executive functioning, enabling objective evaluation of cognitive load during training to improve understanding of cognitive control in sport-specific contexts. Using video-based eye-tracking, we examined pupil and saccade responses in athletes (N = 40) and non-athletes (N = 40) performing an interleaved pro-saccade and anti-saccade task. 
In this task, participants were instructed prior to target appearance to either make a reflexive saccade toward the target (pro-saccade) or inhibit that response and generate a voluntary saccade in the opposite direction (anti-saccade). Larger pupil dilation prior to target onset was observed during anti-saccade compared to pro-saccade preparation (p &lt; 0.001, \u03b7p\u00b2 = 0.153). Athletes showed reduced pupil dilation compared to non-athletes (p &lt; 0.05, \u03b7p\u00b2 = 0.049). In addition, trials with larger pupil dilation and smaller tonic pupil sizes were associated with faster saccade reaction times. Pupil dilation also positively correlated with saccade peak velocities but showed no association with saccade endpoint accuracy. These findings suggest that athletes may engage in more efficient motor preparation, as reflected by reduced pupil dilation. Moreover, phasic pupil dilation, indexing cognitive load, and tonic pupil size, associated with arousal level, both contributed to the control of saccade dynamics during goal-directed movements. Together, these results highlight the utility of pupil size as an objective and informative index for assessing both cognitive and arousal functions in sports science research.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1877','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1877\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This study investigated pupil dynamics to establish a physiological index of mental processes associated with executive functioning, enabling objective evaluation of cognitive load during training to improve understanding of cognitive control in sport-specific contexts. 
Using video-based eye-tracking, we examined pupil and saccade responses in athletes (N = 40) and non-athletes (N = 40) performing an interleaved pro-saccade and anti-saccade task. In this task, participants were instructed prior to target appearance to either make a reflexive saccade toward the target (pro-saccade) or inhibit that response and generate a voluntary saccade in the opposite direction (anti-saccade). Larger pupil dilation prior to target onset was observed during anti-saccade compared to pro-saccade preparation (p &lt; 0.001, \u03b7p\u00b2 = 0.153). Athletes showed reduced pupil dilation compared to non-athletes (p &lt; 0.05, \u03b7p\u00b2 = 0.049). In addition, trials with larger pupil dilation and smaller tonic pupil sizes were associated with faster saccade reaction times. Pupil dilation also positively correlated with saccade peak velocities but showed no association with saccade endpoint accuracy. These findings suggest that athletes may engage in more efficient motor preparation, as reflected by reduced pupil dilation. Moreover, phasic pupil dilation, indexing cognitive load, and tonic pupil size, associated with arousal level, both contributed to the control of saccade dynamics during goal-directed movements. 
Together, these results highlight the utility of pupil size as an objective and informative index for assessing both cognitive and arousal functions in sports science research.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1877','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1877\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.5114\/biolsport.2026.153308\" title=\"Follow DOI:10.5114\/biolsport.2026.153308\" target=\"_blank\">doi:10.5114\/biolsport.2026.153308<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1877','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Francesca Carbone; Abigail Pitt; Angela Nyhout; Stacie Friend; Murray Smith; Heather J. Ferguson<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('1643','tp_abstract')\" style=\"cursor:pointer;\">Art opening minds: An experimental study on the effects of temporal and perspectival complexity in film on open-mindedness<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Quarterly Journal of Experimental Psychology, <\/span><span class=\"tp_pub_additional_volume\">vol. 79, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
102\u2013123, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1643\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1643','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1643\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1643','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1643\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1643','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1643\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Carbone2026,<br \/>\r\ntitle = {Art opening minds: An experimental study on the effects of temporal and perspectival complexity in film on open-mindedness},<br \/>\r\nauthor = {Francesca Carbone and Abigail Pitt and Angela Nyhout and Stacie Friend and Murray Smith and Heather J. Ferguson},<br \/>\r\ndoi = {10.1177\/17470218251333747},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Quarterly Journal of Experimental Psychology},<br \/>\r\nvolume = {79},<br \/>\r\nnumber = {1},<br \/>\r\npages = {102\u2013123},<br \/>\r\npublisher = {SAGE Publications Ltd},<br \/>\r\nabstract = {Aesthetic Cognitivism posits that artworks have the potential to enhance open-mindedness. However, this claim has not yet been explored empirically. Here, we present two experiments that investigate the extent to which two formal features of the film \u2013 temporal and perspectival complexity \u2013 can \u2018open our minds'. In Experiment 1, we manipulated the temporal complexity of the film. 
Participants (Ntotal = 100) watched a film (Memento) either in its original non-chronological order or the same film in chronological order. In Experiment 2, we manipulated perspectival complexity in film. Participants (Ntotal = 100) watched an excerpt from a film (Jackie Brown) that either included the perspectives of multiple characters on an event or a single character's perspective on the same event. Film conditions in both experiments were further compared with a control condition in which participants did not watch a film (N = 50). Participants' open-mindedness was assessed in both experiments through four empirical indicators (creativity, imaginability, cognitive flexibility, openness to new evidence) and in Experiment 2, participants' eye movements, heart rate and electrodermal activity were measured while watching the film. Results showed that watching films, regardless of their temporal or perspectival complexity, modulated only one facet of open-mindedness \u2013 cognitive flexibility \u2013 when compared to the no-film control condition, providing only limited support for the aesthetic cognitivist claim that artistic films can \u2018open our minds'. Real-time measures in Experiment 2 revealed that pupil size and number of fixations were modulated by perspectival complexity: both were smaller when watching a film from multiple perspectives compared to a single perspective. 
Possible explanations for this difference are examined in relation to the viewers' cognitive processes involved in understanding and interpreting film content.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1643','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1643\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Aesthetic Cognitivism posits that artworks have the potential to enhance open-mindedness. However, this claim has not yet been explored empirically. Here, we present two experiments that investigate the extent to which two formal features of the film \u2013 temporal and perspectival complexity \u2013 can \u2018open our minds'. In Experiment 1, we manipulated the temporal complexity of the film. Participants (Ntotal = 100) watched a film (Memento) either in its original non-chronological order or the same film in chronological order. In Experiment 2, we manipulated perspectival complexity in film. Participants (Ntotal = 100) watched an excerpt from a film (Jackie Brown) that either included the perspectives of multiple characters on an event or a single character's perspective on the same event. Film conditions in both experiments were further compared with a control condition in which participants did not watch a film (N = 50). Participants' open-mindedness was assessed in both experiments through four empirical indicators (creativity, imaginability, cognitive flexibility, openness to new evidence) and in Experiment 2, participants' eye movements, heart rate and electrodermal activity were measured while watching the film. 
Results showed that watching films, regardless of their temporal or perspectival complexity, modulated only one facet of open-mindedness \u2013 cognitive flexibility \u2013 when compared to the no-film control condition, providing only limited support for the aesthetic cognitivist claim that artistic films can \u2018open our minds'. Real-time measures in Experiment 2 revealed that pupil size and number of fixations were modulated by perspectival complexity: both were smaller when watching a film from multiple perspectives compared to a single perspective. Possible explanations for this difference are examined in relation to the viewers' cognitive processes involved in understanding and interpreting film content.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1643','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1643\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1177\/17470218251333747\" title=\"Follow DOI:10.1177\/17470218251333747\" target=\"_blank\">doi:10.1177\/17470218251333747<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1643','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Huarui Cao; Lin Mu; Xuejiao Mao; Tang Yao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('1612','tp_abstract')\" style=\"cursor:pointer;\">Effect of tourism architecture shape and self-construal<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Annals of Tourism Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 
116, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201317, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1612\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1612','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1612\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1612','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1612\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1612','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1612\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Cao2026,<br \/>\r\ntitle = {Effect of tourism architecture shape and self-construal},<br \/>\r\nauthor = {Huarui Cao and Lin Mu and Xuejiao Mao and Tang Yao},<br \/>\r\ndoi = {10.1016\/j.annals.2025.104075},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Annals of Tourism Research},<br \/>\r\nvolume = {116},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Elsevier Ltd},<br \/>\r\nabstract = {The issue of whether tourists with varying characteristics exhibit distinct preferences for diverse geometric shapes of architecture remains underexplored in tourism literature. To address this gap, we drew on aesthetic distance theory and conducted eye-tracking and scenario-based experiments to examine an effect of tourism architecture shape in alignment with tourist self-construal. Our findings indicated that independent self-construal tourists favor circular-shaped architecture, while interdependent self-construal tourists prefer angular-shaped architecture. 
Additionally, the results confirmed the mediating role of novelty and highlighted architectural authenticity as a moderator in this effect. These insights enhance our understanding of aesthetic preferences for tourism architecture among tourists with different self-construals and provide practical recommendations for the tourism industry to tailor specific architectural shapes to increase tourists' travel intentions.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1612','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1612\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The issue of whether tourists with varying characteristics exhibit distinct preferences for diverse geometric shapes of architecture remains underexplored in tourism literature. To address this gap, we drew on aesthetic distance theory and conducted eye-tracking and scenario-based experiments to examine an effect of tourism architecture shape in alignment with tourist self-construal. Our findings indicated that independent self-construal tourists favor circular-shaped architecture, while interdependent self-construal tourists prefer angular-shaped architecture. Additionally, the results confirmed the mediating role of novelty and highlighted architectural authenticity as a moderator in this effect. 
These insights enhance our understanding of aesthetic preferences for tourism architecture among tourists with different self-construals and provide practical recommendations for the tourism industry to tailor specific architectural shapes to increase tourists' travel intentions.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1612','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1612\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.annals.2025.104075\" title=\"Follow DOI:10.1016\/j.annals.2025.104075\" target=\"_blank\">doi:10.1016\/j.annals.2025.104075<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1612','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Mark W. Becker; Andrew Rodriguez; Derrek T. Montalvo; Chad Peltier<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('779','tp_abstract')\" style=\"cursor:pointer;\">Reducing the low-prevalence effect with probe trials<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognitive Research: Principles and Implications, <\/span><span class=\"tp_pub_additional_volume\">vol. 11, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201319, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_779\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('779','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_779\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('779','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_779\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('779','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_779\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Becker2026,<br \/>\r\ntitle = {Reducing the low-prevalence effect with probe trials},<br \/>\r\nauthor = {Mark W. Becker and Andrew Rodriguez and Derrek T. Montalvo and Chad Peltier},<br \/>\r\ndoi = {10.1186\/s41235-025-00702-w},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-01-01},<br \/>\r\njournal = {Cognitive Research: Principles and Implications},<br \/>\r\nvolume = {11},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201319},<br \/>\r\nabstract = {As targets become rare in visual search tasks, the likelihood of missing them increases\u2014a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing \u201cprobe\u201d trials\u2014trials with known targets and post-response feedback\u2014reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (T and O among Ls and Qs), and we varied how often each appeared in probe trials. 
The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. Overall, probes may reduce the LPE by increasing perceived prevalence and thereby increasing search guidance, but only when probe targets closely match actual search targets.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('779','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_779\" style=\"display:none;\"><div class=\"tp_abstract_entry\">As targets become rare in visual search tasks, the likelihood of missing them increases\u2014a phenomenon known as the low-prevalence effect (LPE). This has important implications for real-world searches, but reducing the LPE has proven challenging. In Experiment 1, we used a low-prevalence T-among-Ls task and found that distributing \u201cprobe\u201d trials\u2014trials with known targets and post-response feedback\u2014reduced the LPE. In Experiment 2, participants searched for two low-prevalence targets (T and O among Ls and Qs), and we varied how often each appeared in probe trials. The probe benefit scaled with the frequency of the matching target, suggesting limited generalizability to non-probed targets. Experiment 3 used eye tracking to examine whether probes affected quitting thresholds, decision criteria, or guidance. 
Results showed that probes biased top-down guidance toward features of frequently probed targets, without affecting the number of items inspected or the decision criterion. In Experiment 4, we tested whether feedback was necessary for the probe benefit. Findings suggest that probes improve rare-target search by altering perceived prevalence, not through feedback alone. Overall, probes may reduce the LPE by increasing perceived prevalence and thereby increasing search guidance, but only when probe targets closely match actual search targets.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('779','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_779\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s41235-025-00702-w\" title=\"Follow DOI:10.1186\/s41235-025-00702-w\" target=\"_blank\">doi:10.1186\/s41235-025-00702-w<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('779','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr>\r\n                    <td>\r\n                        <h3 class=\"tp_h3\" id=\"tp_h3_2025\">2025<\/h3>\r\n                    <\/td>\r\n                <\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Luan Zimmermann Bortoluzzi; Est\u00eav\u00e3o Carlos-Lima; Gabriela Mueller de Melo; Melissa Hongjin Song Zhu; Gustavo Rohenkohl<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('14004','tp_abstract')\" style=\"cursor:pointer;\">Presaccadic attentional shifts are not modulated by saccade amplitude<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific 
Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201310, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_14004\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14004','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_14004\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14004','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_14004\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14004','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_14004\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{ZimmermannBortoluzzi2025,<br \/>\r\ntitle = {Presaccadic attentional shifts are not modulated by saccade amplitude},<br \/>\r\nauthor = {Luan Zimmermann Bortoluzzi and Est\u00eav\u00e3o Carlos-Lima and Gabriela Mueller de Melo and Melissa Hongjin Song Zhu and Gustavo Rohenkohl},<br \/>\r\ndoi = {10.1038\/s41598-025-09338-8},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201310},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Humans constantly explore the visual environment through saccades, bringing relevant visual stimuli to the center of the gaze. Before the eyes begin to move, visual attention is directed to the intended saccade target. As a consequence of this presaccadic shift of attention (PSA), visual perception is enhanced at the future gaze position. 
PSA has been investigated in a variety of saccade amplitudes, from microsaccades to locations that exceed the oculomotor range. Interestingly, recent studies have shown that PSA effects on visual perception are not equally distributed around the visual field. However, it remains unknown whether the magnitude of presaccadic perceptual enhancement varies with the amplitude of the saccades. Here, we measured contrast sensitivity thresholds during saccade planning in a two-alternative forced-choice (2AFC) discrimination task in human observers. Filtered pink noise (1\/f) patches, presented at four eccentricities scaled in size according to the cortical magnification factor, were used as visual targets. This method was adopted to mitigate well-known eccentricity effects on perception, thereby enabling us to explore the effects associated with saccade amplitudes. First, our results show that saccade preparation enhanced contrast sensitivity in all tested eccentricities. Importantly, we found that this presaccadic perceptual enhancement was not modulated by the amplitude of the saccades. These findings suggest that presaccadic attention operates consistently across different saccade amplitudes, enhancing visual processing at intended gaze positions regardless of saccade size.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14004','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_14004\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Humans constantly explore the visual environment through saccades, bringing relevant visual stimuli to the center of the gaze. Before the eyes begin to move, visual attention is directed to the intended saccade target. As a consequence of this presaccadic shift of attention (PSA), visual perception is enhanced at the future gaze position. 
PSA has been investigated in a variety of saccade amplitudes, from microsaccades to locations that exceed the oculomotor range. Interestingly, recent studies have shown that PSA effects on visual perception are not equally distributed around the visual field. However, it remains unknown whether the magnitude of presaccadic perceptual enhancement varies with the amplitude of the saccades. Here, we measured contrast sensitivity thresholds during saccade planning in a two-alternative forced-choice (2AFC) discrimination task in human observers. Filtered pink noise (1\/f) patches, presented at four eccentricities scaled in size according to the cortical magnification factor, were used as visual targets. This method was adopted to mitigate well-known eccentricity effects on perception, thereby enabling us to explore the effects associated with saccade amplitudes. First, our results show that saccade preparation enhanced contrast sensitivity in all tested eccentricities. Importantly, we found that this presaccadic perceptual enhancement was not modulated by the amplitude of the saccades. 
These findings suggest that presaccadic attention operates consistently across different saccade amplitudes, enhancing visual processing at intended gaze positions regardless of saccade size.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14004','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_14004\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-09338-8\" title=\"Follow DOI:10.1038\/s41598-025-09338-8\" target=\"_blank\">doi:10.1038\/s41598-025-09338-8<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14004','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Cong Zheng; Qifan Wang; He Cui<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13872','tp_abstract')\" style=\"cursor:pointer;\">Continuous sensorimotor transformation enhances robustness of neural dynamics to perturbation in macaque motor cortex<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201317, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13872\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13872','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13872\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13872','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13872\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13872','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13872\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zheng2025a,<br \/>\r\ntitle = {Continuous sensorimotor transformation enhances robustness of neural dynamics to perturbation in macaque motor cortex},<br \/>\r\nauthor = {Cong Zheng and Qifan Wang and He Cui},<br \/>\r\ndoi = {10.1038\/s41467-025-58421-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Neural activity in the motor cortex evolves dynamically to prepare and generate movement. Here, we investigate how motor cortical dynamics adapt to dynamic environments and whether these adaptations influence robustness against disruptions. We apply intracortical microstimulation (ICMS) in the motor cortex of monkeys performing delayed center-out reaches to either a static target (static) or a rotating target (moving) that required interception. 
While ICMS prolongs reaction times (RTs) in the static condition, it does not increase RTs in the moving condition, correlating with faster recovery of neural population activity post-perturbation. Neural dynamics suggests that the moving condition involves ongoing sensorimotor transformations during the delay period, whereas motor planning in the static condition is completed shortly. A neural network model shows that continuous feedback input rapidly corrects perturbation-induced errors in the moving condition. We conclude that continuous sensorimotor transformations enhance the motor cortex's resilience to perturbations, facilitating timely movement execution.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13872','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13872\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Neural activity in the motor cortex evolves dynamically to prepare and generate movement. Here, we investigate how motor cortical dynamics adapt to dynamic environments and whether these adaptations influence robustness against disruptions. We apply intracortical microstimulation (ICMS) in the motor cortex of monkeys performing delayed center-out reaches to either a static target (static) or a rotating target (moving) that required interception. While ICMS prolongs reaction times (RTs) in the static condition, it does not increase RTs in the moving condition, correlating with faster recovery of neural population activity post-perturbation. Neural dynamics suggests that the moving condition involves ongoing sensorimotor transformations during the delay period, whereas motor planning in the static condition is completed shortly. 
A neural network model shows that continuous feedback input rapidly corrects perturbation-induced errors in the moving condition. We conclude that continuous sensorimotor transformations enhance the motor cortex's resilience to perturbations, facilitating timely movement execution.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13872','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13872\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-58421-1\" title=\"Follow DOI:10.1038\/s41467-025-58421-1\" target=\"_blank\">doi:10.1038\/s41467-025-58421-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13872','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Tianze Zhang; Yue Xi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13788','tp_abstract')\" style=\"cursor:pointer;\">The influences of image entropy and text direction on consumer attention: Insights from eye-tracking studies<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychology &amp; Marketing, <\/span><span class=\"tp_pub_additional_volume\">vol. 42, <\/span><span class=\"tp_pub_additional_number\">no. 12, <\/span><span class=\"tp_pub_additional_pages\">pp. 
3266\u20133287, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13788\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13788','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13788\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13788','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13788\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13788','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13788\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2025m,<br \/>\r\ntitle = {The influences of image entropy and text direction on consumer attention: Insights from eye-tracking studies},<br \/>\r\nauthor = {Tianze Zhang and Yue Xi},<br \/>\r\ndoi = {10.1002\/mar.70037},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Psychology & Marketing},<br \/>\r\nvolume = {42},<br \/>\r\nnumber = {12},<br \/>\r\npages = {3266\u20133287},<br \/>\r\npublisher = {John Wiley and Sons Inc},<br \/>\r\nabstract = {As visual content is increasingly prioritized by social media platforms, the effective interplay between image and text is critical for capturing consumer attention. This research aims to investigate the effects of two novel visual cues\u2014image entropy (disorder) and text direction\u2014and presents the concept of an image\u2013text fit effect. Through three eye-tracking experiments, we demonstrate that high-entropy (vs. low-entropy) images and vertical (vs. horizontal) text direction significantly increase consumer attention. 
We identify a \u201cfeeling right\u201d sense as the underlying psychological mechanism, which can be explained via time perception association. Furthermore, we examine the moderating effect of emoji intensity in social media communications on capturing consumer attention. These findings increase the theoretical understanding of visual marketing and provide actionable strategies for practitioners.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13788','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13788\" style=\"display:none;\"><div class=\"tp_abstract_entry\">As visual content is increasingly prioritized by social media platforms, the effective interplay between image and text is critical for capturing consumer attention. This research aims to investigate the effects of two novel visual cues\u2014image entropy (disorder) and text direction\u2014and presents the concept of an image\u2013text fit effect. Through three eye-tracking experiments, we demonstrate that high-entropy (vs. low-entropy) images and vertical (vs. horizontal) text direction significantly increase consumer attention. We identify a \u201cfeeling right\u201d sense as the underlying psychological mechanism, which can be explained via time perception association. Furthermore, we examine the moderating effect of emoji intensity in social media communications on capturing consumer attention. 
These findings increase the theoretical understanding of visual marketing and provide actionable strategies for practitioners.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13788','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13788\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1002\/mar.70037\" title=\"Follow DOI:10.1002\/mar.70037\" target=\"_blank\">doi:10.1002\/mar.70037<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13788','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Hao Zhang; Yiqing Hu; Yang Li; Shuangyu Zhang; Xiao Li Li; Chenguang Zhao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13745','tp_abstract')\" style=\"cursor:pointer;\">Simultaneous dataset of brain, eye and hand during visuomotor tasks<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Data, <\/span><span class=\"tp_pub_additional_volume\">vol. 12, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201315, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13745\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13745','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13745\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13745','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13745\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13745','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13745\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zhang2025f,<br \/>\r\ntitle = {Simultaneous dataset of brain, eye and hand during visuomotor tasks},<br \/>\r\nauthor = {Hao Zhang and Yiqing Hu and Yang Li and Shuangyu Zhang and Xiao Li Li and Chenguang Zhao},<br \/>\r\ndoi = {10.1038\/s41597-024-04227-7},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Data},<br \/>\r\nvolume = {12},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as to record implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visual-motor task. 
Using functional near-infrared spectroscopy and electroencephalography to record brain activity simultaneously facilitates more precise capture of the complex visuomotor brain mechanisms. Additionally, by employing a combined eye movement and manual response, it is possible to fully evaluate the effects of visuomotor outputs from implicit and explicit dimensions. We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13745','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13745\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visuomotor integration is a complex skill set encompassing many fundamental abilities, such as visual search, attention monitoring, and motor control. To explore the dynamic interplay between visual inputs and motor outputs, it is necessary to simultaneously record multiple brain activities with high temporal and spatial resolution, as well as to record implicit and explicit behaviors. However, there is a lack of public datasets that provide simultaneous multiple modalities during a visual-motor task. Using functional near-infrared spectroscopy and electroencephalography to record brain activity simultaneously facilitates more precise capture of the complex visuomotor brain mechanisms. Additionally, by employing a combined eye movement and manual response, it is possible to fully evaluate the effects of visuomotor outputs from implicit and explicit dimensions. 
We recorded whole-brain EEG (34 electrodes) and fNIRS (44 channels) covering the frontal and parietal cortex along with eye movements, behavior sampling, and operant behavior. The dataset underwent rigorous synchronization and quality control to highlight the effectiveness of our experiments and to demonstrate the high quality of our multimodal data framework.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13745','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13745\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41597-024-04227-7\" title=\"Follow DOI:10.1038\/s41597-024-04227-7\" target=\"_blank\">doi:10.1038\/s41597-024-04227-7<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13745','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhao Zeng; Ce Zhang; Yue Xu; Hua He; Yong Gu<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13691','tp_abstract')\" style=\"cursor:pointer;\">Distinct neural population code and causal roles of primate caudate nucleus in multimodal decision-making<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13691\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13691','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13691\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13691','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13691\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13691','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13691\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Zeng2025b,<br \/>\r\ntitle = {Distinct neural population code and causal roles of primate caudate nucleus in multimodal decision-making},<br \/>\r\nauthor = {Zhao Zeng and Ce Zhang and Yue Xu and Hua He and Yong Gu},<br \/>\r\ndoi = {10.1038\/s41467-025-60504-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Perceptual decision-making involves distributed networks spanning both association cortices and subcortical areas. A fundamental question is whether such a network is highly redundant, or each node is distinct with unique function. Using a visuo-vestibular decision-making task, here we show the subcortical caudate nucleus (CN) of male primates displays distinct population code compared to association cortices along the modality dimension. 
Specifically, in a low-dimensional state subspace, neural trajectory in the frontal and posterior-parietal association cortical activity during multimodal-stimulus condition evolves along the visual trajectory, whereas along the vestibular trajectory in the CN. We then show CN population activity is consistent with the animal's behavioral strategy employed within a generalized drift-diffusion framework. Importantly, causal-link experiments, including application of GABAa-receptor agonist, D1-receptor antagonist, and electrical microstimulation, further confirmed CN's critical contributions to perceptual behavior. Our results confirm CN's vital importance to decision making in complex environments with multimodal information.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13691','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13691\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Perceptual decision-making involves distributed networks spanning both association cortices and subcortical areas. A fundamental question is whether such a network is highly redundant, or each node is distinct with unique function. Using a visuo-vestibular decision-making task, here we show the subcortical caudate nucleus (CN) of male primates displays distinct population code compared to association cortices along the modality dimension. Specifically, in a low-dimensional state subspace, neural trajectory in the frontal and posterior-parietal association cortical activity during multimodal-stimulus condition evolves along the visual trajectory, whereas along the vestibular trajectory in the CN. We then show CN population activity is consistent with the animal's behavioral strategy employed within a generalized drift-diffusion framework. 
Importantly, causal-link experiments, including application of GABAa-receptor agonist, D1-receptor antagonist, and electrical microstimulation, further confirmed CN's critical contributions to perceptual behavior. Our results confirm CN's vital importance to decision making in complex environments with multimodal information.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13691','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13691\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-60504-y\" title=\"Follow DOI:10.1038\/s41467-025-60504-y\" target=\"_blank\">doi:10.1038\/s41467-025-60504-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13691','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zinong Yang; Stephanie D. Williams; Ewa Beldzik; Stephanie Anakwe; Emilia Schimmelpfennig; Laura D. Lewis<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13459','tp_abstract')\" style=\"cursor:pointer;\">Attentional failures after sleep deprivation are locked to joint neurovascular, pupil and cerebrospinal fluid flow dynamics<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Neuroscience, <\/span><span class=\"tp_pub_additional_pages\">pp. 
2526\u20132536, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13459\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13459','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13459\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13459','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13459\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13459','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13459\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Yang2025e,<br \/>\r\ntitle = {Attentional failures after sleep deprivation are locked to joint neurovascular, pupil and cerebrospinal fluid flow dynamics},<br \/>\r\nauthor = {Zinong Yang and Stephanie D. Williams and Ewa Beldzik and Stephanie Anakwe and Emilia Schimmelpfennig and Laura D. Lewis},<br \/>\r\ndoi = {10.1038\/s41593-025-02098-8},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Neuroscience},<br \/>\r\npages = {2526\u20132536},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Sleep deprivation rapidly disrupts cognitive function and in the long term contributes to neurological disease. Why sleep deprivation has such profound effects on cognition is not well understood. Here we use simultaneous fast fMRI\u2013EEG to test how sleep deprivation modulates cognitive, neural and fluid dynamics in the human brain. 
We demonstrate that attentional failures during wakefulness after sleep deprivation are tightly orchestrated in a series of brain\u2013body changes, including neuronal shifts, pupil constriction and cerebrospinal fluid (CSF) flow pulsations, pointing to a coupled system of fluid dynamics and neuromodulatory state. CSF flow and hemodynamics are coupled to attentional function within the awake state, with CSF pulsations following attentional impairment. The timing of these dynamics is consistent with a vascular mechanism regulated by neuromodulatory state. The attentional costs of sleep deprivation may thus reflect an irrepressible need for rest periods driven by a central neuromodulatory system that regulates both neuronal and fluid physiology.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13459','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13459\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Sleep deprivation rapidly disrupts cognitive function and in the long term contributes to neurological disease. Why sleep deprivation has such profound effects on cognition is not well understood. Here we use simultaneous fast fMRI\u2013EEG to test how sleep deprivation modulates cognitive, neural and fluid dynamics in the human brain. We demonstrate that attentional failures during wakefulness after sleep deprivation are tightly orchestrated in a series of brain\u2013body changes, including neuronal shifts, pupil constriction and cerebrospinal fluid (CSF) flow pulsations, pointing to a coupled system of fluid dynamics and neuromodulatory state. CSF flow and hemodynamics are coupled to attentional function within the awake state, with CSF pulsations following attentional impairment. 
The timing of these dynamics is consistent with a vascular mechanism regulated by neuromodulatory state. The attentional costs of sleep deprivation may thus reflect an irrepressible need for rest periods driven by a central neuromodulatory system that regulates both neuronal and fluid physiology.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13459','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13459\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41593-025-02098-8\" title=\"Follow DOI:10.1038\/s41593-025-02098-8\" target=\"_blank\">doi:10.1038\/s41593-025-02098-8<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13459','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yu Fang Yang; Matthias Gamer<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13454','tp_abstract')\" style=\"cursor:pointer;\">Facial features associated with fear and happiness attract gaze during brief exposure without enhancing emotion recognition<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201315, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13454\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13454','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13454\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13454','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13454\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13454','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13454\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Yang2025d,<br \/>\r\ntitle = {Facial features associated with fear and happiness attract gaze during brief exposure without enhancing emotion recognition},<br \/>\r\nauthor = {Yu Fang Yang and Matthias Gamer},<br \/>\r\ndoi = {10.1038\/s41598-025-12327-6},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Facial features transmit emotions but their effect on visual orienting and explicit emotion recognition is debated. Here we examined whether fixating on diagnostic features of emotional expressions\u2014such as eye region for fear and the mouth for happiness\u2014affects saccadic targeting and improves recognition accuracy. Across two pre-registered experiments, participants viewed fearful, happy, and neutral faces for short intervals (50 or 150 ms) while the initial fixation location was manipulated. 
Although such brief stimulation does not allow for visual exploration, the faces still elicited reflexive saccades that occurred after stimulus offset. These saccades were modulated by the emotional expressions indicating a consistent preferential saccadic orienting towards diagnostic features, even with limited exposure. As this effect disappeared for inverted faces, it can be attributed to an extrafoveal processing of facial features instead of an attentional orienting towards physically salient image regions. Participants' recognition accuracy was unaffected by the foveated facial feature, but this observation might also be due to ceiling effects in performance. Collectively, these findings contribute to understanding the attentional mechanisms of feature-based processing in the perception of emotional facial expressions.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13454','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13454\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Facial features transmit emotions but their effect on visual orienting and explicit emotion recognition is debated. Here we examined whether fixating on diagnostic features of emotional expressions\u2014such as eye region for fear and the mouth for happiness\u2014affects saccadic targeting and improves recognition accuracy. Across two pre-registered experiments, participants viewed fearful, happy, and neutral faces for short intervals (50 or 150 ms) while the initial fixation location was manipulated. Although such brief stimulation does not allow for visual exploration, the faces still elicited reflexive saccades that occurred after stimulus offset. 
These saccades were modulated by the emotional expressions indicating a consistent preferential saccadic orienting towards diagnostic features, even with limited exposure. As this effect disappeared for inverted faces, it can be attributed to an extrafoveal processing of facial features instead of an attentional orienting towards physically salient image regions. Participants' recognition accuracy was unaffected by the foveated facial feature, but this observation might also be due to ceiling effects in performance. Collectively, these findings contribute to understanding the attentional mechanisms of feature-based processing in the perception of emotional facial expressions.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13454','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13454\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-12327-6\" title=\"Follow DOI:10.1038\/s41598-025-12327-6\" target=\"_blank\">doi:10.1038\/s41598-025-12327-6<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13454','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Xiaojuan Xue; Gilles Pourtois<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13346','tp_abstract')\" style=\"cursor:pointer;\">Neurophysiological evidence for emotional attention modulation depending on goal relevance<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 
15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13346\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13346','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13346\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13346','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13346\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13346','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13346\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Xue2025b,<br \/>\r\ntitle = {Neurophysiological evidence for emotional attention modulation depending on goal relevance},<br \/>\r\nauthor = {Xiaojuan Xue and Gilles Pourtois},<br \/>\r\ndoi = {10.1038\/s41598-025-96537-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Although threat-related stimuli can capture attention automatically, recent findings have challenged this assumption by showing that goal rather than threat can be prioritized and eventually guide attentional control. In this study, we used high density electroencephalography (EEG) in 40 participants while peripheral emotional faces (either fear or happiness) were either goal-relevant or irrelevant during a dot-probe task (DPT). The use of peripheral vision was established by eye-tracking. 
Both the face specific N170 component and the subsequent Early Posterior Negativity (EPN) were enhanced by fear at the cue level, yet the latter one only when fear was goal relevant. Importantly, we found that early on following target onset at the P1 level, both value and goal relevance drove spatial attention and interacted with each other such that when they were goal-relevant, fearful faces captured attention less than when they were not. These results suggest that emotional attention is flexible and it can be influenced by the goal relevance of emotion. Moreover, they shed light on the electrophysiological manifestations of this flexibility and dovetail with the assumption that sensory gain control effects occurring in the visual cortex depending on attentional control are multiplexed and determined by both value and goal.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13346','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13346\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Although threat-related stimuli can capture attention automatically, recent findings have challenged this assumption by showing that goal rather than threat can be prioritized and eventually guide attentional control. In this study, we used high density electroencephalography (EEG) in 40 participants while peripheral emotional faces (either fear or happiness) were either goal-relevant or irrelevant during a dot-probe task (DPT). The use of peripheral vision was established by eye-tracking. Both the face specific N170 component and the subsequent Early Posterior Negativity (EPN) were enhanced by fear at the cue level, yet the latter one only when fear was goal relevant. 
Importantly, we found that early on following target onset at the P1 level, both value and goal relevance drove spatial attention and interacted with each other such that when they were goal-relevant, fearful faces captured attention less than when they were not. These results suggest that emotional attention is flexible and it can be influenced by the goal relevance of emotion. Moreover, they shed light on the electrophysiological manifestations of this flexibility and dovetail with the assumption that sensory gain control effects occurring in the visual cortex depending on attentional control are multiplexed and determined by both value and goal.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13346','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13346\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-96537-y\" title=\"Follow DOI:10.1038\/s41598-025-96537-y\" target=\"_blank\">doi:10.1038\/s41598-025-96537-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13346','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Jia-Jie Xu; Jun-Yi Chen; Hong-Zhou Xu; Zhiwei Zheng; Jing Yu<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13316','tp_abstract')\" style=\"cursor:pointer;\">The role of inhibitory function in associative memory among older adults and its plasticity<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognitive Research: Principles and Implications, <\/span><span class=\"tp_pub_additional_volume\">vol. 
10, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201320, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13316','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13316','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13316','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13316\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Xu2025,<br \/>\r\ntitle = {The role of inhibitory function in associative memory among older adults and its plasticity},<br \/>\r\nauthor = {Jia-Jie Xu and Jun-Yi Chen and Hong-Zhou Xu and Zhiwei Zheng and Jing Yu},<br \/>\r\ndoi = {10.1186\/s41235-025-00688-5},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Cognitive Research: Principles and Implications},<br \/>\r\nvolume = {10},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201320},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {Associative memory deteriorates with age. One possible reason for this associative memory deficit in older adults is a decline in inhibitory function. However, it remains unclear what role of inhibitory function plays in age-related associative memory deficits, and whether and how acute training of inhibitory function could ameliorate the detrimental effects of inhibitory deficits on associative memory in older adults. 
In Experiment 1, 80 participants (40 younger and 40 older adults) studied scene-word pairs while attempting to inhibit interfering words during encoding, with two conditions: gist and non-gist interferences. In Experiment 2, 66 older adults were randomly assigned to either acute inhibitory training or a control group, and eye-tracking technology was used to capture the benefits of acute inhibitory training. Results showed that older adults were more disturbed by gist than non-gist interferences because of hyper-binding, and that inhibitory function mediated the relationship between age and associative memory accuracy. Notably, although acute inhibitory training did not significantly improve associative memory accuracy in the training group compared to the control group, structural equation model showed that older adults in the acute training group showed shorter fixation durations and lower frequencies in the interference region of interest, leading to better associative memory. These results indicate that inhibitory function plays a mediating role in age-related associative memory decline, as well as its plasticity in this association. It provides a potential pathway to improve associative memory in older adults.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13316','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13316\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Associative memory deteriorates with age. One possible reason for this associative memory deficit in older adults is a decline in inhibitory function. 
However, it remains unclear what role of inhibitory function plays in age-related associative memory deficits, and whether and how acute training of inhibitory function could ameliorate the detrimental effects of inhibitory deficits on associative memory in older adults. In Experiment 1, 80 participants (40 younger and 40 older adults) studied scene-word pairs while attempting to inhibit interfering words during encoding, with two conditions: gist and non-gist interferences. In Experiment 2, 66 older adults were randomly assigned to either acute inhibitory training or a control group, and eye-tracking technology was used to capture the benefits of acute inhibitory training. Results showed that older adults were more disturbed by gist than non-gist interferences because of hyper-binding, and that inhibitory function mediated the relationship between age and associative memory accuracy. Notably, although acute inhibitory training did not significantly improve associative memory accuracy in the training group compared to the control group, structural equation model showed that older adults in the acute training group showed shorter fixation durations and lower frequencies in the interference region of interest, leading to better associative memory. These results indicate that inhibitory function plays a mediating role in age-related associative memory decline, as well as its plasticity in this association. 
It provides a potential pathway to improve associative memory in older adults.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13316','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13316\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s41235-025-00688-5\" title=\"Follow DOI:10.1186\/s41235-025-00688-5\" target=\"_blank\">doi:10.1186\/s41235-025-00688-5<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13316','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Jackie Wai Yi Wo; Weiyan Liao; Janet Hui Hsiao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13144','tp_abstract')\" style=\"cursor:pointer;\">Impact of mask use on facial emotion recognition in individuals with subclinical social anxiety: An eye-tracking study<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognitive Research: Principles and Implications, <\/span><span class=\"tp_pub_additional_volume\">vol. 10, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201318, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13144\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13144','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13144\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13144','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13144\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13144','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13144\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wo2025,<br \/>\r\ntitle = {Impact of mask use on facial emotion recognition in individuals with subclinical social anxiety: An eye-tracking study},<br \/>\r\nauthor = {Jackie Wai Yi Wo and Weiyan Liao and Janet Hui Hsiao},<br \/>\r\ndoi = {10.1186\/s41235-025-00635-4},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Cognitive Research: Principles and Implications},<br \/>\r\nvolume = {10},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201318},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {Previous studies suggested that social anxiety is associated with interpretation bias, theory of mind deficit, and eye gaze avoidance when identifying facial emotions. We tested the hypothesis that socially anxious individuals would be more affected by mask use during facial emotion recognition. 88 healthy undergraduates with various levels of social anxiety were invited. Participants judged the emotions of masked and unmasked facial expressions. 
Eye Movement Analysis with Hidden Markov Models was used to analyze participants' eye movement patterns during the task. Potential confounders including gender, depressive symptoms, stress, and executive planning ability were controlled for in the analyses. Results failed to support our hypothesis. Instead, higher social anxiety was associated with higher accuracy rates for angry and fearful faces and lower false alarm rates for sad faces. Eye movement patterns were similar across social anxiety levels. Interestingly, an exploratory moderation analysis revealed that an increase in using a more eye-centered strategy due to mask use was significantly associated with a larger drop in accuracy rate for fearful faces among individuals with higher social anxiety, while non-significantly associated with a smaller drop among individuals with lower social anxiety. Thus, our study indicates social anxiety, at least at subclinical levels, may be associated with a generally heightened sensitivity to negative emotions. However, such heightened sensitivity diminishes if they switch to a more eye-centered strategy when viewing masked facial emotions. Potential mechanisms and implications were discussed.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13144','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13144\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Previous studies suggested that social anxiety is associated with interpretation bias, theory of mind deficit, and eye gaze avoidance when identifying facial emotions. We tested the hypothesis that socially anxious individuals would be more affected by mask use during facial emotion recognition. 88 healthy undergraduates with various levels of social anxiety were invited. 
Participants judged the emotions of masked and unmasked facial expressions. Eye Movement Analysis with Hidden Markov Models was used to analyze participants' eye movement patterns during the task. Potential confounders including gender, depressive symptoms, stress, and executive planning ability were controlled for in the analyses. Results failed to support our hypothesis. Instead, higher social anxiety was associated with higher accuracy rates for angry and fearful faces and lower false alarm rates for sad faces. Eye movement patterns were similar across social anxiety levels. Interestingly, an exploratory moderation analysis revealed that an increase in using a more eye-centered strategy due to mask use was significantly associated with a larger drop in accuracy rate for fearful faces among individuals with higher social anxiety, while non-significantly associated with a smaller drop among individuals with lower social anxiety. Thus, our study indicates social anxiety, at least at subclinical levels, may be associated with a generally heightened sensitivity to negative emotions. However, such heightened sensitivity diminishes if they switch to a more eye-centered strategy when viewing masked facial emotions. 
Potential mechanisms and implications were discussed.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13144','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13144\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s41235-025-00635-4\" title=\"Follow DOI:10.1186\/s41235-025-00635-4\" target=\"_blank\">doi:10.1186\/s41235-025-00635-4<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13144','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Iris Wiegand; Mariska Van Pouderoijen; Joukje M. Oosterman; Kay Deckers; Gernot Horstmann<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('13023','tp_abstract')\" style=\"cursor:pointer;\">Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201328, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13023','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13023','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13023','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13023\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wiegand2025,<br \/>\r\ntitle = {Contributions of distractor dwelling, skipping, and revisiting to age differences in visual search},<br \/>\r\nauthor = {Iris Wiegand and Mariska Van Pouderoijen and Joukje M. Oosterman and Kay Deckers and Gernot Horstmann},<br \/>\r\ndoi = {10.1038\/s41598-024-83532-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201328},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. 
In a sample of Dutch healthy adults (19\u201385 years), we measured reaction times and eye-movements during a target present\/absent visual search task, with varying target-distractor similarity and visual set size. We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed stronger to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting have a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13023','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13023\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visual search becomes slower with aging, particularly when targets are difficult to discriminate from distractors. Multiple distractor rejection processes may contribute independently to slower search times: dwelling on, skipping of, and revisiting of distractors, measurable by eye-tracking. The present study investigated how age affects each of the distractor rejection processes, and how these contribute to the final search times in difficult (inefficient) visual search. In a sample of Dutch healthy adults (19\u201385 years), we measured reaction times and eye-movements during a target present\/absent visual search task, with varying target-distractor similarity and visual set size. 
We found that older age was associated with longer dwelling and more revisiting of distractors, while skipping was unaffected by age. This suggests that increased processing time and reduced visuo-spatial memory for visited distractor locations contribute to age-related decline in visual search. Furthermore, independently of age, dwelling and revisiting contributed stronger to search times than skipping of distractors. In conclusion, under conditions of poor guidance, dwelling and revisiting have a major contribution to search times and age-related slowing in difficult visual search, while skipping is largely negligible.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13023','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13023\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-024-83532-y\" title=\"Follow DOI:10.1038\/s41598-024-83532-y\" target=\"_blank\">doi:10.1038\/s41598-024-83532-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13023','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Bayley M. Wellons; Christopher N. Wahlheim<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12921','tp_abstract')\" style=\"cursor:pointer;\">Misinformation reminders enhance belief updating and memory for corrections: The role of attention during encoding revealed by eye tracking<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognitive Research: Principles and Implications, <\/span><span class=\"tp_pub_additional_volume\">vol. 
10, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201322, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12921\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12921','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12921\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12921','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12921\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12921','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12921\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wellons2025,<br \/>\r\ntitle = {Misinformation reminders enhance belief updating and memory for corrections: The role of attention during encoding revealed by eye tracking},<br \/>\r\nauthor = {Bayley M. Wellons and Christopher N. Wahlheim},<br \/>\r\ndoi = {10.1186\/s41235-025-00649-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Cognitive Research: Principles and Implications},<br \/>\r\nvolume = {10},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201322},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {Misinformation exposure can cause inaccurate beliefs and memories. These unwanted outcomes can be mitigated when misinformation reminders\u2014veracity-labeled statements that repeat earlier-read false information\u2014appear before corrections with true information. 
The present experiment used eye tracking to examine the role of attention while encoding corrective details in the beneficial effects of reminder-based corrections. Participants read headlines in a belief-updating task that included a within-subjects manipulation of correction format. They first rated the familiarity and veracity of true and false headlines (Phase 1). Then, they read true headlines that corrected false headlines or affirmed true headlines (Phase 2). The true headlines appeared (1) without veracity labels, (2) with veracity labels, or (3) with misinformation reminders and veracity labels. Finally, participants re-rated the veracity of the Phase 1 headlines and rated their memory for whether those headlines were corrected in Phase 2 (Phase 3). Reminder-based corrections led to the greatest reduction in false beliefs, best high confidence recognition of corrections, and earliest eye fixations to the true details of corrections during encoding in Phase 2. Corrections remembered with the highest confidence rating were associated with more and earlier fixations to true details in correction statements in Phase 2. Collectively, these results suggest that misinformation reminders directed attention to corrective details, which improved encoding and subsequent memory for veracity information. These results have applied implications in suggesting that optimal correction formats should include features that direct attention to, and thus support encoding of, the contrast between false and true information.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12921','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12921\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Misinformation exposure can cause inaccurate beliefs and memories. 
These unwanted outcomes can be mitigated when misinformation reminders\u2014veracity-labeled statements that repeat earlier-read false information\u2014appear before corrections with true information. The present experiment used eye tracking to examine the role of attention while encoding corrective details in the beneficial effects of reminder-based corrections. Participants read headlines in a belief-updating task that included a within-subjects manipulation of correction format. They first rated the familiarity and veracity of true and false headlines (Phase 1). Then, they read true headlines that corrected false headlines or affirmed true headlines (Phase 2). The true headlines appeared (1) without veracity labels, (2) with veracity labels, or (3) with misinformation reminders and veracity labels. Finally, participants re-rated the veracity of the Phase 1 headlines and rated their memory for whether those headlines were corrected in Phase 2 (Phase 3). Reminder-based corrections led to the greatest reduction in false beliefs, best high confidence recognition of corrections, and earliest eye fixations to the true details of corrections during encoding in Phase 2. Corrections remembered with the highest confidence rating were associated with more and earlier fixations to true details in correction statements in Phase 2. Collectively, these results suggest that misinformation reminders directed attention to corrective details, which improved encoding and subsequent memory for veracity information. 
These results have applied implications in suggesting that optimal correction formats should include features that direct attention to, and thus support encoding of, the contrast between false and true information.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12921','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12921\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s41235-025-00649-y\" title=\"Follow DOI:10.1186\/s41235-025-00649-y\" target=\"_blank\">doi:10.1186\/s41235-025-00649-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12921','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">\u00c1gnes Welker; Orsolya Pet\u0151-Plaszk\u00f3; Luca Vereb\u00e9lyi; Ferenc Gombos; Istv\u00e1n Winkler; Ilona Kov\u00e1cs<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12919','tp_abstract')\" style=\"cursor:pointer;\">Neurodiversity in mental simulation: Conceptual but not visual imagery priming modulates perception across the imagery vividness spectrum<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12919\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12919','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12919\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12919','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12919\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12919','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12919\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Welker2025,<br \/>\r\ntitle = {Neurodiversity in mental simulation: Conceptual but not visual imagery priming modulates perception across the imagery vividness spectrum},<br \/>\r\nauthor = {\u00c1gnes Welker and Orsolya Pet\u0151-Plaszk\u00f3 and Luca Vereb\u00e9lyi and Ferenc Gombos and Istv\u00e1n Winkler and Ilona Kov\u00e1cs},<br \/>\r\ndoi = {10.1038\/s41598-025-05100-2},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Mental simulation\u2014the ability to internally model sensory, conceptual, or future events\u2014may include mental imagery as a component, with considerable individual variability in its vividness and dependence on sensory detail. While self-reports have been widely used to assess imagery, they are subjective and prone to bias. Among more objective methods, imagery priming in binocular rivalry has been employed to investigate the influence of mental imagery on perception, but findings have been ambiguous. 
Here, we introduce a no-report version of the task, using eye-tracking-based optokinetic nystagmus assessment to provide a more reliable measure of perceptual shifts. In addition to visual imagery priming, we introduce conceptual priming, which does not rely on sensory imagery but engages abstract representations. In visual imagery priming, perceptual modulation correlated with self-reported vividness, and participants with low vividness did not show modulatory effects. However, in conceptual priming, effects were observed across the entire vividness spectrum, demonstrating that both concrete sensory-based and abstract conceptual representations can influence perception. These findings challenge purely sensory accounts of mental imagery. We propose avoiding deficit-based terms such as \u201caphantasia\u201d and advocate for a neuroaffirmative perspective on mental simulation diversity.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12919','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12919\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Mental simulation\u2014the ability to internally model sensory, conceptual, or future events\u2014may include mental imagery as a component, with considerable individual variability in its vividness and dependence on sensory detail. While self-reports have been widely used to assess imagery, they are subjective and prone to bias. Among more objective methods, imagery priming in binocular rivalry has been employed to investigate the influence of mental imagery on perception, but findings have been ambiguous. Here, we introduce a no-report version of the task, using eye-tracking-based optokinetic nystagmus assessment to provide a more reliable measure of perceptual shifts. 
In addition to visual imagery priming, we introduce conceptual priming, which does not rely on sensory imagery but engages abstract representations. In visual imagery priming, perceptual modulation correlated with self-reported vividness, and participants with low vividness did not show modulatory effects. However, in conceptual priming, effects were observed across the entire vividness spectrum, demonstrating that both concrete sensory-based and abstract conceptual representations can influence perception. These findings challenge purely sensory accounts of mental imagery. We propose avoiding deficit-based terms such as \u201caphantasia\u201d and advocate for a neuroaffirmative perspective on mental simulation diversity.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12919','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12919\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-05100-2\" title=\"Follow DOI:10.1038\/s41598-025-05100-2\" target=\"_blank\">doi:10.1038\/s41598-025-05100-2<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12919','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">B\u00e9la Weiss; Annam\u00e1ria Manga; \u00c1d\u00e1m N\u00e1rai; Ad\u00e9l Bihari; Judit Zsuga; Zolt\u00e1n Vidny\u00e1nszky<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12906','tp_abstract')\" style=\"cursor:pointer;\">Reward boosts cognitive control during working memory maintenance<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span 
class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12906\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12906','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12906\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12906','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12906\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12906','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12906\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Weiss2025,<br \/>\r\ntitle = {Reward boosts cognitive control during working memory maintenance},<br \/>\r\nauthor = {B\u00e9la Weiss and Annam\u00e1ria Manga and \u00c1d\u00e1m N\u00e1rai and Ad\u00e9l Bihari and Judit Zsuga and Zolt\u00e1n Vidny\u00e1nszky},<br \/>\r\ndoi = {10.1038\/s41598-025-09949-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Working memory (WM) involves short-term maintenance and manipulation of goal-relevant information, with cognitive control playing a crucial role in these processes due to WM's limited capacity. Pupillometry studies show distinct pupillary changes for WM stages, reflecting cognitive effort and load. 
Motivational incentives enhance WM performance by potentially improving encoding, maintenance, or retrieval, though the specific components influenced by reward remain unclear. This study specifically tested whether reward modulates cognitive control processes during WM maintenance using pupillometry. Participants performed a delayed-estimation orientation WM task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12906','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12906\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Working memory (WM) involves short-term maintenance and manipulation of goal-relevant information, with cognitive control playing a crucial role in these processes due to WM's limited capacity. Pupillometry studies show distinct pupillary changes for WM stages, reflecting cognitive effort and load. Motivational incentives enhance WM performance by potentially improving encoding, maintenance, or retrieval, though the specific components influenced by reward remain unclear. This study specifically tested whether reward modulates cognitive control processes during WM maintenance using pupillometry. Participants performed a delayed-estimation orientation WM task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. 
These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12906','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12906\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-09949-1\" title=\"Follow DOI:10.1038\/s41598-025-09949-1\" target=\"_blank\">doi:10.1038\/s41598-025-09949-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12906','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Hanliang Wei; Tak Lam; Weijian Liu; Waxun Su; Zheng Wang; Qiandong Wang; Xiao Lin; Peng Li<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12883','tp_abstract')\" style=\"cursor:pointer;\">Initial and sustained attentional bias toward emotional faces in patients with major depressive disorder<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Eye Movement Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 18, <\/span><span class=\"tp_pub_additional_number\">no. 6, <\/span><span class=\"tp_pub_additional_pages\">pp. 
72, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12883\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12883','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12883\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12883','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12883\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12883','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12883\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wei2025,<br \/>\r\ntitle = {Initial and sustained attentional bias toward emotional faces in patients with major depressive disorder},<br \/>\r\nauthor = {Hanliang Wei and Tak Lam and Weijian Liu and Waxun Su and Zheng Wang and Qiandong Wang and Xiao Lin and Peng Li},<br \/>\r\ndoi = {10.3390\/jemr18060072},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Journal of Eye Movement Research},<br \/>\r\nvolume = {18},<br \/>\r\nnumber = {6},<br \/>\r\npages = {72},<br \/>\r\nabstract = {Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. 
This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12883','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12883\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12883','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12883\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/jemr18060072\" title=\"Follow DOI:10.3390\/jemr18060072\" target=\"_blank\">doi:10.3390\/jemr18060072<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12883','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Sara Jane Webb; Brian Kwan; Raphael Bernier; Katarzyna Charwarska; Geraldine Dawson; James Dziura; Susan Faja; Gerhard Hellmann; Shafali Jeste; Natalia 
Kleinhans; April Levin; Adam Naples; Maura Sabatos-DeVito; Damla \u015eent\u00fcrk; Frederick Shic; Catherine Sugar; James C. McPartland; Autism Biomarkers Consortium for Clinical Trials<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12861','tp_abstract')\" style=\"cursor:pointer;\">Face perception, attention, and memory as predictors of social change in autistic children<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Neurodevelopmental Disorders, <\/span><span class=\"tp_pub_additional_volume\">vol. 17, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u20139, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12861\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12861','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12861\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12861','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12861\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12861','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12861\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Webb2025,<br \/>\r\ntitle = {Face perception, attention, and memory as predictors of social change in autistic children},<br \/>\r\nauthor = {Sara Jane Webb and Brian Kwan and Raphael Bernier and Katarzyna Charwarska and Geraldine Dawson and James Dziura and Susan Faja and Gerhard Hellmann and Shafali Jeste and 
Natalia Kleinhans and April Levin and Adam Naples and Maura Sabatos-DeVito and Damla \u015eent\u00fcrk and Frederick Shic and Catherine Sugar and James C. McPartland and Autism Biomarkers Consortium for Clinical Trials},<br \/>\r\ndoi = {10.1186\/s11689-025-09646-0},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Journal of Neurodevelopmental Disorders},<br \/>\r\nvolume = {17},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u20139},<br \/>\r\npublisher = {BioMed Central Ltd},<br \/>\r\nabstract = {Objective: Social perception and attention markers have been identified that, on average, differentiate autistic from non-autistic children. However, little is known about how these markers predict behavior over time at both short and long time intervals. Methods: We conducted a large multisite, naturalistic study of 6- to 11-year-old children diagnosed with ASD (n = 214). We evaluated three markers of social processing: social perception via the ERP N170 Latency to Upright Faces; social attention via the Eye Tracking (ET) OMI (Oculomotor Index of Gaze to Human Faces) that captures percent looking to faces from three tasks; and social cognition via the NEPSY Face Memory task. Each was evaluated in predicting social ability and autistic social behaviors derived from parental interviews and questionnaires about child behavior at + 6 months (T3) and + 4 years (T4). Results: Adjusting for baseline performance, time between measurements, age, and sex, our results suggest differential prognostic relations for each of the markers. The ERP N170 Latency to Upright Faces showed limited prognostic relations, with a significant relation to short term changes in face memory. The ET OMI was related to face memory over both short and long term. Both the ET OMI and Face Memory predicted long-term autistic social behavior scores. 
Conclusions: In the context of a large-scale, rigorous evaluation of candidate markers for use in future clinical trials, our primary markers had significant but small-effect prognostic capability. The ET OMI and Face Memory showed significant long-term predictive relations, with increased visual attention to faces and better face memory at baseline related to increased social approach and decreased autistic social behaviors 4 years later.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12861','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12861\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Objective: Social perception and attention markers have been identified that, on average, differentiate autistic from non-autistic children. However, little is known about how these markers predict behavior over time at both short and long time intervals. Methods: We conducted a large multisite, naturalistic study of 6- to 11-year-old children diagnosed with ASD (n = 214). We evaluated three markers of social processing: social perception via the ERP N170 Latency to Upright Faces; social attention via the Eye Tracking (ET) OMI (Oculomotor Index of Gaze to Human Faces) that captures percent looking to faces from three tasks; and social cognition via the NEPSY Face Memory task. Each was evaluated in predicting social ability and autistic social behaviors derived from parental interviews and questionnaires about child behavior at + 6 months (T3) and + 4 years (T4). Results: Adjusting for baseline performance, time between measurements, age, and sex, our results suggest differential prognostic relations for each of the markers. The ERP N170 Latency to Upright Faces showed limited prognostic relations, with a significant relation to short term changes in face memory. 
The ET OMI was related to face memory over both short and long term. Both the ET OMI and Face Memory predicted long-term autistic social behavior scores. Conclusions: In the context of a large-scale, rigorous evaluation of candidate markers for use in future clinical trials, our primary markers had significant but small-effect prognostic capability. The ET OMI and Face Memory showed significant long-term predictive relations, with increased visual attention to faces and better face memory at baseline related to increased social approach and decreased autistic social behaviors 4 years later.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12861','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12861\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s11689-025-09646-0\" title=\"Follow DOI:10.1186\/s11689-025-09646-0\" target=\"_blank\">doi:10.1186\/s11689-025-09646-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12861','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Xin Wang; Shitao Chen; Keyang Wang; Liyu Cao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12763','tp_abstract')\" style=\"cursor:pointer;\">Predicted action-effects shape action representation through pre-activation of alpha oscillations<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Communications Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 
1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201311, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12763\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12763','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12763\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12763','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12763\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12763','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12763\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wang2025n,<br \/>\r\ntitle = {Predicted action-effects shape action representation through pre-activation of alpha oscillations},<br \/>\r\nauthor = {Xin Wang and Shitao Chen and Keyang Wang and Liyu Cao},<br \/>\r\ndoi = {10.1038\/s42003-025-07750-4},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Communications Biology},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201311},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Actions are typically accompanied by sensory feedback (or action-effects). Action-effects, in turn, influence the action. Theoretical accounts of action control assume a pre-activation of action-effects prior to action execution. Here we show that when participants were asked to report the time of their voluntary keypress using the position of a fast-rotating clock hand, a predictable action-effect (i.e. 
a 250 ms delayed sound after keypress) led to a shift of visuospatial attention towards the clock hand position of action-effect onset, thus demonstrating an influence of action-effects on action representation. Importantly, the attention shift occurred about 1 second before the action execution, which was further preceded and predicted by a lateralisation of alpha oscillations in the visual cortex. Our results indicate that when the spatial location is the key feature of action-effects, the neural implementation of the action-effect pre-activation is achieved through alpha lateralisation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12763','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12763\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Actions are typically accompanied by sensory feedback (or action-effects). Action-effects, in turn, influence the action. Theoretical accounts of action control assume a pre-activation of action-effects prior to action execution. Here we show that when participants were asked to report the time of their voluntary keypress using the position of a fast-rotating clock hand, a predictable action-effect (i.e. a 250 ms delayed sound after keypress) led to a shift of visuospatial attention towards the clock hand position of action-effect onset, thus demonstrating an influence of action-effects on action representation. Importantly, the attention shift occurred about 1 second before the action execution, which was further preceded and predicted by a lateralisation of alpha oscillations in the visual cortex. 
Our results indicate that when the spatial location is the key feature of action-effects, the neural implementation of the action-effect pre-activation is achieved through alpha lateralisation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12763','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12763\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s42003-025-07750-4\" title=\"Follow DOI:10.1038\/s42003-025-07750-4\" target=\"_blank\">doi:10.1038\/s42003-025-07750-4<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12763','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Carla A. Wall; Kayla Smith; Frederick Shic; Bridgette Kelleher; Abigail Hogan; Elizabeth A. Will; Jane E. Roberts<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12591','tp_abstract')\" style=\"cursor:pointer;\">Heart rate defined sustained attention relates to visual attention in autism and fragile X syndrome<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20139, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12591\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12591','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12591\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12591','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12591\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12591','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12591\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Wall2025b,<br \/>\r\ntitle = {Heart rate defined sustained attention relates to visual attention in autism and fragile X syndrome},<br \/>\r\nauthor = {Carla A. Wall and Kayla Smith and Frederick Shic and Bridgette Kelleher and Abigail Hogan and Elizabeth A. Will and Jane E. Roberts},<br \/>\r\ndoi = {10.1038\/s41598-025-09537-3},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u20139},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Social attention, including shared attention and social orienting, is essential for positive social interactions. Although early visual social attention is often quantified using eye tracking, these indices may not consistently reflect cognitive engagement. 
Heart rate defined sustained attention (HRDSA) is a physiological measure that can index cognitive engagement alongside visual attention, leading to more comprehensive assessments of attentional processes that are particularly important in young, neurodiverse children with high support needs, including those with autism and fragile X syndrome (FXS). The present study examined visual and heart-defined measures of social attention to the Selective Social Attention task, a video-based assay of social attention, in children with autism, FXS, and neurotypical development. Linear mixed models examined group and condition effects in multiple cardiac indices and overall looking at the scene. Findings suggest that, overall, children across all groups engaged similarly across the experiment in most dimensions of HRDSA, and consistent with previous work, autistic children spent less time visually attending to the scene than either other group. HRDSA was positively associated with visual social attention. Combining physiological and visual attention measures may elucidate the complex nature of social attention and be especially valuable for neurodiverse children when typical assessments are inaccessible.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12591','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12591\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Social attention, including shared attention and social orienting, is essential for positive social interactions. Although early visual social attention is often quantified using eye tracking, these indices may not consistently reflect cognitive engagement. 
Heart rate defined sustained attention (HRDSA) is a physiological measure that can index cognitive engagement alongside visual attention, leading to more comprehensive assessments of attentional processes that are particularly important in young, neurodiverse children with high support needs, including those with autism and fragile X syndrome (FXS). The present study examined visual and heart-defined measures of social attention to the Selective Social Attention task, a video-based assay of social attention, in children with autism, FXS, and neurotypical development. Linear mixed models examined group and condition effects in multiple cardiac indices and overall looking at the scene. Findings suggest that, overall, children across all groups engaged similarly across the experiment in most dimensions of HRDSA, and consistent with previous work, autistic children spent less time visually attending to the scene than either other group. HRDSA was positively associated with visual social attention. 
Combining physiological and visual attention measures may elucidate the complex nature of social attention and be especially valuable for neurodiverse children when typical assessments are inaccessible.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12591','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12591\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-09537-3\" title=\"Follow DOI:10.1038\/s41598-025-09537-3\" target=\"_blank\">doi:10.1038\/s41598-025-09537-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12591','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Preeti Verghese; Adrien Chopin; \u00c2ngela Gomes-Tomaz; Noelia G. Alcalde; Dennis M. Levi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12392','tp_abstract')\" style=\"cursor:pointer;\">Vergence anomalies are associated with impaired stereopsis in amblyopia<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Vision Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 237, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12392','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12392','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12392\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12392','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12392\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Verghese2025,<br \/>\r\ntitle = {Vergence anomalies are associated with impaired stereopsis in amblyopia},<br \/>\r\nauthor = {Preeti Verghese and Adrien Chopin and \u00c2ngela Gomes-Tomaz and Noelia G. Alcalde and Dennis M. Levi},<br \/>\r\ndoi = {10.1016\/j.visres.2025.108696},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Vision Research},<br \/>\r\nvolume = {237},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Elsevier Ltd},<br \/>\r\nabstract = {We examined the relationship between stereopsis and fusional vergence in groups of amblyopic and stereo-normal control observers. As absolute disparity is thought to be the basis for relative disparity and for disparity-driven vergence, we hypothesized that vergence anomalies would be accompanied by impaired stereopsis. Specifically, we examined whether patterns of impaired stereopsis across the central 20\u00b0 of the visual field were accompanied by impaired fusional vergence for stimuli confined to these regions. Stereopsis was measured locally across the visual field with disparity steps of 5 to 20 arcmin. 
Fusional vergence to large disparity steps (2 to 3\u00b0) was measured with binocular eye tracking. The vergence stimuli were random dot stereograms, in one of 3 spatial configurations: a large disc 16\u00b0 in diameter, a small disc 4\u00b0 in diameter, and an annulus with outer and inner diameters corresponding to the large and small discs. Of the controls (n = 25) with no history of abnormal visual development, 12 individuals exhibited normal stereopsis across the visual field and normal vergence gains for all configurations. Thirteen individuals with weak stereopsis in the central field tended to have anomalous vergence for small stimuli, but normal vergence for larger stimuli. Amblyopic\/strabismic individuals (n = 12) had poor stereopsis and poor vergence for small stimuli. We report a strong correlation between vergence, coarse and fine stereopsis, with no double dissociation (no cases of impaired vergence with normal stereopsis). Taken together, the results suggest that compromised binocular interaction is the cause of both stereopsis and vergence deficits.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12392','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12392\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We examined the relationship between stereopsis and fusional vergence in groups of amblyopic and stereo-normal control observers. As absolute disparity is thought to be the basis for relative disparity and for disparity-driven vergence, we hypothesized that vergence anomalies would be accompanied by impaired stereopsis. Specifically, we examined whether patterns of impaired stereopsis across the central 20\u00b0 of the visual field were accompanied by impaired fusional vergence for stimuli confined to these regions. 
Stereopsis was measured locally across the visual field with disparity steps of 5 to 20 arcmin. Fusional vergence to large disparity steps (2 to 3\u00b0) was measured with binocular eye tracking. The vergence stimuli were random dot stereograms, in one of 3 spatial configurations: a large disc 16\u00b0 in diameter, a small disc 4\u00b0 in diameter, and an annulus with outer and inner diameters corresponding to the large and small discs. Of the controls (n = 25) with no history of abnormal visual development, 12 individuals exhibited normal stereopsis across the visual field and normal vergence gains for all configurations. Thirteen individuals with weak stereopsis in the central field tended to have anomalous vergence for small stimuli, but normal vergence for larger stimuli. Amblyopic\/strabismic individuals (n = 12) had poor stereopsis and poor vergence for small stimuli. We report a strong correlation between vergence, coarse and fine stereopsis, with no double dissociation (no cases of impaired vergence with normal stereopsis). 
Taken together, the results suggest that compromised binocular interaction is the cause of both stereopsis and vergence deficits.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12392','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12392\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.visres.2025.108696\" title=\"Follow DOI:10.1016\/j.visres.2025.108696\" target=\"_blank\">doi:10.1016\/j.visres.2025.108696<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12392','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Micha\u00ebl Vanhoyland; Peter Janssen; Tom Theys<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12326','tp_abstract')\" style=\"cursor:pointer;\">Single-neuron correlates of visual consciousness in human lateral occipital complex<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201317, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12326\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12326','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12326\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12326','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12326\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12326','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12326\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Vanhoyland2025,<br \/>\r\ntitle = {Single-neuron correlates of visual consciousness in human lateral occipital complex},<br \/>\r\nauthor = {Micha\u00ebl Vanhoyland and Peter Janssen and Tom Theys},<br \/>\r\ndoi = {10.1038\/s41467-025-67077-w},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201317},<br \/>\r\nabstract = {Conscious perception, a critical aspect of human cognition, is assumed to emerge from a complex network of interacting brain regions that transmit information via feedforward and recurrent pathways. This study presents single- and multiunit recordings from the human lateral occipital complex (LO), a key region for shape and object recognition, during three distinct perceptual paradigms: backward masking, flash suppression and binocular rivalry. Stimulus awareness increased decoding accuracy and decoders assigned higher probabilities to the consciously perceived stimulus during periods of dichoptic stimulus presentation. 
These findings highlight the intricate neural mechanisms underlying visual awareness and show that LO responses predominantly align with subjective phenomenology, offering new insights into the neural correlates of visual consciousness.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12326','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12326\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Conscious perception, a critical aspect of human cognition, is assumed to emerge from a complex network of interacting brain regions that transmit information via feedforward and recurrent pathways. This study presents single- and multiunit recordings from the human lateral occipital complex (LO), a key region for shape and object recognition, during three distinct perceptual paradigms: backward masking, flash suppression and binocular rivalry. Stimulus awareness increased decoding accuracy and decoders assigned higher probabilities to the consciously perceived stimulus during periods of dichoptic stimulus presentation. 
These findings highlight the intricate neural mechanisms underlying visual awareness and show that LO responses predominantly align with subjective phenomenology, offering new insights into the neural correlates of visual consciousness.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12326','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12326\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-67077-w\" title=\"Follow DOI:10.1038\/s41467-025-67077-w\" target=\"_blank\">doi:10.1038\/s41467-025-67077-w<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12326','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Sandra Tyralla; Eckart Zimmermann<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12054','tp_abstract')\" style=\"cursor:pointer;\">Serial dependencies and overt attention shifts<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Vision, <\/span><span class=\"tp_pub_additional_volume\">vol. 25, <\/span><span class=\"tp_pub_additional_number\">no. 14, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12054\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12054','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12054\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12054','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12054\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12054','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12054\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Tyralla2025,<br \/>\r\ntitle = {Serial dependencies and overt attention shifts},<br \/>\r\nauthor = {Sandra Tyralla and Eckart Zimmermann},<br \/>\r\ndoi = {10.1167\/jov.25.14.12},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Journal of Vision},<br \/>\r\nvolume = {25},<br \/>\r\nnumber = {14},<br \/>\r\npages = {1\u201316},<br \/>\r\nabstract = {When visual input is uncertain, visual perception is biased toward the stimulation from the recent past. We can attend to stimuli either endogenously based on an internal decision or exogenously, triggered by an external event. Here, we wondered whether serial dependencies are selective for the attentional mode which we draw to stimuli. We studied overt attention shifts: saccades and recorded either motor error correction or visual orientation judgments. In Experiment 1, we assessed sensorimotor serial dependencies, focusing on how the postsaccadic error influences subsequent saccade amplitudes. In Experiment 2, we evaluated visual serial dependencies by measuring orientation judgments, contingent on the type of saccade performed. 
In separate sessions, participants performed either only voluntary saccades or only delayed saccades, or both saccade types alternated within a session. Our results revealed that sensorimotor serial dependencies were selective for the saccade type performed. When voluntary saccades had been performed in the preceding trial, serial dependencies were much stronger in the current trial if voluntary instead of delayed saccades were executed. In contrast, visual serial dependencies were not influenced by the type of saccade performed. Our findings reveal that shifts in exogenous and endogenous attention differentially impact sensorimotor serial dependencies, but visual serial dependencies remain unaffected.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12054','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12054\" style=\"display:none;\"><div class=\"tp_abstract_entry\">When visual input is uncertain, visual perception is biased toward the stimulation from the recent past. We can attend to stimuli either endogenously based on an internal decision or exogenously, triggered by an external event. Here, we wondered whether serial dependencies are selective for the attentional mode which we draw to stimuli. We studied overt attention shifts: saccades and recorded either motor error correction or visual orientation judgments. In Experiment 1, we assessed sensorimotor serial dependencies, focusing on how the postsaccadic error influences subsequent saccade amplitudes. In Experiment 2, we evaluated visual serial dependencies by measuring orientation judgments, contingent on the type of saccade performed. In separate sessions, participants performed either only voluntary saccades or only delayed saccades, or both saccade types alternated within a session. 
Our results revealed that sensorimotor serial dependencies were selective for the saccade type performed. When voluntary saccades had been performed in the preceding trial, serial dependencies were much stronger in the current trial if voluntary instead of delayed saccades were executed. In contrast, visual serial dependencies were not influenced by the type of saccade performed. Our findings reveal that shifts in exogenous and endogenous attention differentially impact sensorimotor serial dependencies, but visual serial dependencies remain unaffected.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12054','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12054\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1167\/jov.25.14.12\" title=\"Follow DOI:10.1167\/jov.25.14.12\" target=\"_blank\">doi:10.1167\/jov.25.14.12<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12054','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ekin T\u00fcn\u00e7ok; Marisa Carrasco; Jonathan Winawer<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12023','tp_abstract')\" style=\"cursor:pointer;\">Spatial attention selectively alters visual cortical representation during target anticipation<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201319, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_12023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12023','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_12023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12023','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12023\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12023','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12023\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Tuencok2025,<br \/>\r\ntitle = {Spatial attention selectively alters visual cortical representation during target anticipation},<br \/>\r\nauthor = {Ekin T\u00fcn\u00e7ok and Marisa Carrasco and Jonathan Winawer},<br \/>\r\ndoi = {10.1038\/s41467-025-63795-3},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201319},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Attention enables us to efficiently and flexibly interact with the environment by prioritizing specific image locations and features in preparation for responding to stimuli. Using a concurrent psychophysics\u2013fMRI experiment, we investigate how covert spatial attention modulates responses in human visual cortex before target onset and how it affects subsequent behavioral performance. Performance improves at cued locations and worsens at uncued locations compared to distributed attention, demonstrating a selective processing tradeoff. 
Pre-target BOLD responses in cortical visual field maps reveal two key changes: First, a stimulus-independent baseline shift, with increases near cued locations and decreases elsewhere, paralleling behavioral results. Second, a shift in population receptive field centers toward the attended location. Both effects increase in higher visual areas. Together, these findings reveal that spatial attention has large effects on visual cortex prior to target appearance, altering neural response properties across multiple visual field maps and enhancing performance through anticipatory mechanisms.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12023','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_12023\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Attention enables us to efficiently and flexibly interact with the environment by prioritizing specific image locations and features in preparation for responding to stimuli. Using a concurrent psychophysics\u2013fMRI experiment, we investigate how covert spatial attention modulates responses in human visual cortex before target onset and how it affects subsequent behavioral performance. Performance improves at cued locations and worsens at uncued locations compared to distributed attention, demonstrating a selective processing tradeoff. Pre-target BOLD responses in cortical visual field maps reveal two key changes: First, a stimulus-independent baseline shift, with increases near cued locations and decreases elsewhere, paralleling behavioral results. Second, a shift in population receptive field centers toward the attended location. Both effects increase in higher visual areas. 
Together, these findings reveal that spatial attention has large effects on visual cortex prior to target appearance, altering neural response properties across multiple visual field maps and enhancing performance through anticipatory mechanisms.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12023','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12023\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-63795-3\" title=\"Follow DOI:10.1038\/s41467-025-63795-3\" target=\"_blank\">doi:10.1038\/s41467-025-63795-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12023','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Tobiasz Trawi\u0144ski; Chuanli Zang; Letizia Palumbo; Nick Donnelly<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11932','tp_abstract')\" style=\"cursor:pointer;\">Individuating experience moderates the effect of implicit racial bias on eye movements to other race faces: A cross-cultural study<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11932\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11932','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11932\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11932','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11932\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11932','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11932\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Trawinski2025,<br \/>\r\ntitle = {Individuating experience moderates the effect of implicit racial bias on eye movements to other race faces: A cross-cultural study},<br \/>\r\nauthor = {Tobiasz Trawi\u0144ski and Chuanli Zang and Letizia Palumbo and Nick Donnelly},<br \/>\r\ndoi = {10.1038\/s41598-025-13272-0},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {The present cross-cultural study investigated gaze behaviour in the context of assessing the aesthetic value of figurative paintings depicting White and East Asian individuals in social scenes. Across three experiments, we examined how implicit racial attitudes and self-reported individuating experiences influenced gaze patterns when participants evaluated their liking of these paintings. 
Despite no requirement to inspect faces in the paintings, the results revealed that participants with negative implicit attitudes toward other-race individuals and limited individuating experience with those groups, spent more time fixating on other-race faces. This relationship between implicit attitudes and individuating experience in guiding gaze behaviour was consistent across both British and Chinese participants, despite differing definitions of same- and other-race faces between the groups. Our findings suggest that gaze behaviour during the aesthetic evaluation of figurative paintings is shaped by an interaction between attitudinal and experiential factors, which operates across cultural contexts.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11932','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11932\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The present cross-cultural study investigated gaze behaviour in the context of assessing the aesthetic value of figurative paintings depicting White and East Asian individuals in social scenes. Across three experiments, we examined how implicit racial attitudes and self-reported individuating experiences influenced gaze patterns when participants evaluated their liking of these paintings. Despite no requirement to inspect faces in the paintings, the results revealed that participants with negative implicit attitudes toward other-race individuals and limited individuating experience with those groups, spent more time fixating on other-race faces. This relationship between implicit attitudes and individuating experience in guiding gaze behaviour was consistent across both British and Chinese participants, despite differing definitions of same- and other-race faces between the groups. 
Our findings suggest that gaze behaviour during the aesthetic evaluation of figurative paintings is shaped by an interaction between attitudinal and experiential factors, which operates across cultural contexts.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11932','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11932\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-13272-0\" title=\"Follow DOI:10.1038\/s41598-025-13272-0\" target=\"_blank\">doi:10.1038\/s41598-025-13272-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11932','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Catharina Tibken; Simon P. Tiffin-Richards<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11840','tp_abstract')\" style=\"cursor:pointer;\">Reading behavior as an indicator of comprehension monitoring when reading expository texts<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Metacognition and Learning, <\/span><span class=\"tp_pub_additional_volume\">vol. 20, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201329, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11840','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11840','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11840\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11840','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11840\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Tibken2025,<br \/>\r\ntitle = {Reading behavior as an indicator of comprehension monitoring when reading expository texts},<br \/>\r\nauthor = {Catharina Tibken and Simon P. Tiffin-Richards},<br \/>\r\ndoi = {10.1007\/s11409-025-09440-2},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Metacognition and Learning},<br \/>\r\nvolume = {20},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201329},<br \/>\r\npublisher = {Springer},<br \/>\r\nabstract = {Comprehension of expository texts is an important prerequisite for self-regulated learning. Processes of passive validation and metacognitive monitoring are thought to be involved in building a coherent situation model of a text. Inconsistency tasks are often used to measure these processes. Several studies have shown longer reading times for inconsistent sentences than for consistent sentences. However, it remains unclear whether the additional time arises from passive disruptions of the reading process when encountering an inconsistency or from metacognitive processes of reanalysis of previous text. 
To address this issue, we recorded the reading behavior of 96 university students with an eye-tracker while they read inconsistent and consistent expository texts. We analyzed first-pass reading (first-pass reading time, lookbacks) and reanalysis (rereading time, revisits) at the level of the (in)consistent target word, at the sentence-final word of the target sentence, and in the pre-target text. Our results did not strongly support the hypothesis that immediate changes in reading behavior when inconsistencies are first encountered influence the detection and processing of inconsistencies. Our results partially supported the hypothesis that processes of text reanalysis, specifically of the source of inconsistency, increase the probability of identifying an inconsistency. The findings indicate that a purposeful reanalysis of passages that appear inconsistent to readers improves situation model construction for (short) expository texts about conceptually difficult topics. Learning from texts thus requires metacognitive comprehension monitoring beyond passive validation processes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11840','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11840\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Comprehension of expository texts is an important prerequisite for self-regulated learning. Processes of passive validation and metacognitive monitoring are thought to be involved in building a coherent situation model of a text. Inconsistency tasks are often used to measure these processes. Several studies have shown longer reading times for inconsistent sentences than for consistent sentences. 
However, it remains unclear whether the additional time arises from passive disruptions of the reading process when encountering an inconsistency or from metacognitive processes of reanalysis of previous text. To address this issue, we recorded the reading behavior of 96 university students with an eye-tracker while they read inconsistent and consistent expository texts. We analyzed first-pass reading (first-pass reading time, lookbacks) and reanalysis (rereading time, revisits) at the level of the (in)consistent target word, at the sentence-final word of the target sentence, and in the pre-target text. Our results did not strongly support the hypothesis that immediate changes in reading behavior when inconsistencies are first encountered influence the detection and processing of inconsistencies. Our results partially supported the hypothesis that processes of text reanalysis, specifically of the source of inconsistency, increase the probability of identifying an inconsistency. The findings indicate that a purposeful reanalysis of passages that appear inconsistent to readers improves situation model construction for (short) expository texts about conceptually difficult topics. 
Learning from texts thus requires metacognitive comprehension monitoring beyond passive validation processes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11840','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11840\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s11409-025-09440-2\" title=\"Follow DOI:10.1007\/s11409-025-09440-2\" target=\"_blank\">doi:10.1007\/s11409-025-09440-2<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11840','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhongbin Su; Xiaolin Zhou; Stefan Pollmann; Lihui Wang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11487','tp_abstract')\" style=\"cursor:pointer;\">Dynamic face-related eye movement representations in the human ventral pathway<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Communications Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11487\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11487','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11487\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11487','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11487\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11487','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11487\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Su2025c,<br \/>\r\ntitle = {Dynamic face-related eye movement representations in the human ventral pathway},<br \/>\r\nauthor = {Zhongbin Su and Xiaolin Zhou and Stefan Pollmann and Lihui Wang},<br \/>\r\ndoi = {10.1038\/s42003-025-09039-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Communications Biology},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Multiple brain areas along the ventral pathway have been known to represent face images. Here, in a magnetoencephalography (MEG) experiment, we show dynamic representations of face-related eye movements in the ventral pathway in the absence of image perception. Participants followed a dot presented on a uniform background, the movement of which represented gaze tracks acquired previously during their free-viewing of face and house pictures. 
We found a dominant role of the ventral stream in representing face-related gaze tracks, starting from the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL), and extending to the medial temporal and ventral occipitotemporal cortex. Our findings show that the ventral pathway represents the gaze tracks used to explore faces, by which top-down prediction of face category in OFC and ATL may guide, via the medial temporal cortex or directly, face perception in the ventral occipitotemporal cortex.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11487','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11487\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Multiple brain areas along the ventral pathway have been known to represent face images. Here, in a magnetoencephalography (MEG) experiment, we show dynamic representations of face-related eye movements in the ventral pathway in the absence of image perception. Participants followed a dot presented on a uniform background, the movement of which represented gaze tracks acquired previously during their free-viewing of face and house pictures. We found a dominant role of the ventral stream in representing face-related gaze tracks, starting from the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL), and extending to the medial temporal and ventral occipitotemporal cortex. 
Our findings show that the ventral pathway represents the gaze tracks used to explore faces, by which top-down prediction of face category in OFC and ATL may guide, via the medial temporal cortex or directly, face perception in the ventral occipitotemporal cortex.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11487','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11487\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s42003-025-09039-y\" title=\"Follow DOI:10.1038\/s42003-025-09039-y\" target=\"_blank\">doi:10.1038\/s42003-025-09039-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11487','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Renana Storm; Viktoria Wrobel; Antonia Frings; Andreas Sprenger; Christoph Helmchen<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11442','tp_abstract')\" style=\"cursor:pointer;\">Functional brain activity in persistent postural-perceptual dizziness (PPPD) during galvanic vestibular stimulation reveals sensitization in the multisensory vestibular cortical network<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201311, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11442\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11442','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11442\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11442','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11442\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11442','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11442\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Storm2025,<br \/>\r\ntitle = {Functional brain activity in persistent postural-perceptual dizziness (PPPD) during galvanic vestibular stimulation reveals sensitization in the multisensory vestibular cortical network},<br \/>\r\nauthor = {Renana Storm and Viktoria Wrobel and Antonia Frings and Andreas Sprenger and Christoph Helmchen},<br \/>\r\ndoi = {10.1038\/s41598-025-11529-2},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201311},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Persistent postural-perceptual dizziness (PPPD) is often preceded by vestibular disorders. We applied galvanic vestibular stimulation (GVS) and related stimulus-evoked activity to individual ratings of perceived motion for each stimulus and to perceived egomotion thresholds by GVS and behavioural parameters outside the scanner: levels of functional disability by standardized questionnaires, visual motion coherence, passive egomotion perception by chair rotation and quantitative postural stability. 
We hypothesized that the preceding vestibular disorder predisposes to abnormal brain excitability by vestibular stimulation. All participants showed normal vestibular function tests on quantitative testing. GVS with different intensities was applied to 28 patients and 28 age- and gender-matched healthy participants (HC) in the scanner. After each stimulus, participants rated their perceived level of egomotion. GVS perception threshold was significantly lower in PPPD patients. Contrasting stimulus-identical GVS against a sham stimulus, group comparison revealed a stronger activation in the patient's supramarginal gyrus, insular cortex (operculum 3), and vermis. This stronger excitability was not related to the individual threshold of perceived egomotion by GVS. Patients rated GVS-evoked egomotion intensity by identical GVS intensities larger than HC but neural activity did not correlate with individual ratings of perceived egomotion by GVS. As GVS evoked larger egomotion and larger brain activation in patients, the ratio of brain activity to egomotion perception was not different between groups. GVS-evoked insular activity increased with the level of PPPD-related disability and postural imbalance. The larger activation in multisensory cortical vestibular network indicates a sensitization to vestibular stimuli eliciting egomotion perception which increases with levels of PPPD disability. It seems to reflect a sensory-neural amplification rather than an abnormal sensory-perceptual scaling.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11442','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11442\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Persistent postural-perceptual dizziness (PPPD) is often preceded by vestibular disorders. 
We applied galvanic vestibular stimulation (GVS) and related stimulus-evoked activity to individual ratings of perceived motion for each stimulus and to perceived egomotion thresholds by GVS and behavioural parameters outside the scanner: levels of functional disability by standardized questionnaires, visual motion coherence, passive egomotion perception by chair rotation and quantitative postural stability. We hypothesized that the preceding vestibular disorder predisposes to abnormal brain excitability by vestibular stimulation. All participants showed normal vestibular function tests on quantitative testing. GVS with different intensities was applied to 28 patients and 28 age- and gender-matched healthy participants (HC) in the scanner. After each stimulus, participants rated their perceived level of egomotion. GVS perception threshold was significantly lower in PPPD patients. Contrasting stimulus-identical GVS against a sham stimulus, group comparison revealed a stronger activation in the patient's supramarginal gyrus, insular cortex (operculum 3), and vermis. This stronger excitability was not related to the individual threshold of perceived egomotion by GVS. Patients rated GVS-evoked egomotion intensity by identical GVS intensities larger than HC but neural activity did not correlate with individual ratings of perceived egomotion by GVS. As GVS evoked larger egomotion and larger brain activation in patients, the ratio of brain activity to egomotion perception was not different between groups. GVS-evoked insular activity increased with the level of PPPD-related disability and postural imbalance. The larger activation in multisensory cortical vestibular network indicates a sensitization to vestibular stimuli eliciting egomotion perception which increases with levels of PPPD disability. 
It seems to reflect a sensory-neural amplification rather than an abnormal sensory-perceptual scaling.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11442','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11442\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-11529-2\" title=\"Follow DOI:10.1038\/s41598-025-11529-2\" target=\"_blank\">doi:10.1038\/s41598-025-11529-2<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11442','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Caleb Stone; Jason B. Mattingley; Dragan Rangelov<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11432','tp_abstract')\" style=\"cursor:pointer;\">Neural mechanisms of metacognitive improvement under speed pressure<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Communications Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11432\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11432','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11432\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11432','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11432\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11432','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11432\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Stone2025,<br \/>\r\ntitle = {Neural mechanisms of metacognitive improvement under speed pressure},<br \/>\r\nauthor = {Caleb Stone and Jason B. Mattingley and Dragan Rangelov},<br \/>\r\ndoi = {10.1038\/s42003-025-07646-3},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Communications Biology},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. 
These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11432','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11432\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The ability to accurately monitor the quality of one's choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, participants were faster, less accurate, and showed superior metacognition with short deadlines. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants' metacognitive ratings following errors under short relative to long response deadlines. 
Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11432','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11432\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s42003-025-07646-3\" title=\"Follow DOI:10.1038\/s42003-025-07646-3\" target=\"_blank\">doi:10.1038\/s42003-025-07646-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11432','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ramanujan Srinath; Amy M. Ni; Claire Marucci; Marlene R. Cohen; David H. Brainard<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11297','tp_abstract')\" style=\"cursor:pointer;\">Orthogonal neural representations support perceptual judgments of natural stimuli<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201317, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11297\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11297','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11297\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11297','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11297\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11297','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11297\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Srinath2025a,<br \/>\r\ntitle = {Orthogonal neural representations support perceptual judgments of natural stimuli},<br \/>\r\nauthor = {Ramanujan Srinath and Amy M. Ni and Claire Marucci and Marlene R. Cohen and David H. Brainard},<br \/>\r\ndoi = {10.1038\/s41598-025-88910-8},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {In natural visually guided behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on blank backgrounds. Natural images, however, contain task-irrelevant background elements that might interfere with the perception of object features. Recent studies suggest that visual feature estimation can be modeled through the linear decoding of task-relevant information from visual cortex. 
So, if the representations of task-relevant and irrelevant features are not orthogonal in the neural population, then variation in the task-irrelevant features would impair task performance. We tested this hypothesis using human psychophysics and monkey neurophysiology combined with parametrically variable naturalistic stimuli. We demonstrate that (1) the neural representation of one feature (the position of an object) in visual area V4 is orthogonal to those of several background features, (2) the ability of human observers to precisely judge object position was largely unaffected by those background features, and (3) many features of the object and the background (and of objects from a separate stimulus set) are orthogonally represented in V4 neural population responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of object features despite the richness of natural visual scenes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11297','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11297\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In natural visually guided behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on blank backgrounds. Natural images, however, contain task-irrelevant background elements that might interfere with the perception of object features. Recent studies suggest that visual feature estimation can be modeled through the linear decoding of task-relevant information from visual cortex. 
So, if the representations of task-relevant and irrelevant features are not orthogonal in the neural population, then variation in the task-irrelevant features would impair task performance. We tested this hypothesis using human psychophysics and monkey neurophysiology combined with parametrically variable naturalistic stimuli. We demonstrate that (1) the neural representation of one feature (the position of an object) in visual area V4 is orthogonal to those of several background features, (2) the ability of human observers to precisely judge object position was largely unaffected by those background features, and (3) many features of the object and the background (and of objects from a separate stimulus set) are orthogonally represented in V4 neural population responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of object features despite the richness of natural visual scenes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11297','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11297\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-88910-8\" title=\"Follow DOI:10.1038\/s41598-025-88910-8\" target=\"_blank\">doi:10.1038\/s41598-025-88910-8<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11297','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Qiao Songlin; Xuemei Xia; Jing Chen; Matteo Valsecchi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11220','tp_abstract')\" style=\"cursor:pointer;\">Attentional tracking reduces cortical alpha oscillations<\/a> <span class=\"tp_pub_type 
tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201314, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11220','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11220','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11220','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11220\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Songlin2025,<br \/>\r\ntitle = {Attentional tracking reduces cortical alpha oscillations},<br \/>\r\nauthor = {Qiao Songlin and Xuemei Xia and Jing Chen and Matteo Valsecchi},<br \/>\r\ndoi = {10.1038\/s41598-025-14585-w},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201314},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {The premotor theory of attention suggests that both overt and covert attentional orienting are governed by similar mechanisms and neural structures, a concept extensively investigated in paradigms involving shifts in attention and gaze towards peripheral targets. 
Previous studies have found a strong link between cortical alpha oscillations and overt smooth pursuit of a target. However, the relationship between alpha oscillations and covert tracking of peripheral moving stimuli remains unclear. To address this, we asked 16 observers to maintain fixation while covertly attending to a visual stimulus moving along the horizontal meridian at varying speeds (2, 6, or 12 \u00b0\/s), within either the left or right hemifield. We simultaneously recorded both eye movements and EEG data. Our results revealed that alpha power was significantly reduced when observers tracked a target that moved further in the periphery, independent of its speed. These findings confirm that the distribution of alpha power is sensitive to the allocation of covert attention during tracking. This suggests a tight link between the attentional processes involved in covert tracking and overt pursuit of a moving target, supporting the premotor theory of attention.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11220','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11220\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The premotor theory of attention suggests that both overt and covert attentional orienting are governed by similar mechanisms and neural structures, a concept extensively investigated in paradigms involving shifts in attention and gaze towards peripheral targets. Previous studies have found a strong link between cortical alpha oscillations and overt smooth pursuit of a target. However, the relationship between alpha oscillations and covert tracking of peripheral moving stimuli remains unclear. 
To address this, we asked 16 observers to maintain fixation while covertly attending to a visual stimulus moving along the horizontal meridian at varying speeds (2, 6, or 12 \u00b0\/s), within either the left or right hemifield. We simultaneously recorded both eye movements and EEG data. Our results revealed that alpha power was significantly reduced when observers tracked a target that moved further in the periphery, independent of its speed. These findings confirm that the distribution of alpha power is sensitive to the allocation of covert attention during tracking. This suggests a tight link between the attentional processes involved in covert tracking and overt pursuit of a moving target, supporting the premotor theory of attention.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11220','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11220\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-14585-w\" title=\"Follow DOI:10.1038\/s41598-025-14585-w\" target=\"_blank\">doi:10.1038\/s41598-025-14585-w<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11220','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Sabyasachi Shivkumar; Gregory C. DeAngelis; Ralf M. 
Haefner<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10989','tp_abstract')\" style=\"cursor:pointer;\">Hierarchical motion perception as causal inference<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201314, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10989\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10989','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10989\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10989','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10989\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10989','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10989\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shivkumar2025,<br \/>\r\ntitle = {Hierarchical motion perception as causal inference},<br \/>\r\nauthor = {Sabyasachi Shivkumar and Gregory C. DeAngelis and Ralf M. 
Haefner},<br \/>\r\ndoi = {10.1038\/s41467-025-58797-0},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201314},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Motion can only be defined relative to a reference frame; yet it remains unclear which reference frame guides perception. A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure in the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, progressively inferring structured reference frames and \u201cperceives\u201d motion in the appropriate reference frame. Critical model predictions are supported by two experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles providing inspiration for building better models of visual processing in general.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10989','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10989\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Motion can only be defined relative to a reference frame; yet it remains unclear which reference frame guides perception. 
A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure in the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, progressively inferring structured reference frames and \u201cperceives\u201d motion in the appropriate reference frame. Critical model predictions are supported by two experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles providing inspiration for building better models of visual processing in general.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10989','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10989\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-58797-0\" title=\"Follow DOI:10.1038\/s41467-025-58797-0\" target=\"_blank\">doi:10.1038\/s41467-025-58797-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10989','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Cal M. Shearer; Annalise B. Rawson; Helen C. Barron; Jill X. 
O'Reilly<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10886','tp_abstract')\" style=\"cursor:pointer;\">Memory reactivation during rest forms shortcuts in a cognitive map<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10886\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10886','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10886\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10886','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10886\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10886','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10886\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shearer2025,<br \/>\r\ntitle = {Memory reactivation during rest forms shortcuts in a cognitive map},<br \/>\r\nauthor = {Cal M. Shearer and Annalise B. Rawson and Helen C. Barron and Jill X. 
O'Reilly},<br \/>\r\ndoi = {10.1038\/s41598-025-06742-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Efficient and flexible cognition relies upon cognitive maps\u2014representations of concepts and the relations between them. Cognitive maps integrate relations that were learned separately into a cohesive whole. Memory reactivation during rest and sleep may contribute to cognitive map formation in two ways: by simply strengthening memories for directly experienced relations, or by reorganising concepts and creating new relations that capture the underlying structure. We designed a multi-stage learning task to test whether reactivation during rest is involved in restructuring memories as opposed to simply consolidating what was experienced. We causally manipulated memory reactivation during rest using awake, contextual targeted memory reactivation. We found that promoting memory reactivation during rest qualitatively reorganises the cognitive map by forming \u2018shortcuts\u2019 between events which have not been experienced together. These shortcuts in memory extend beyond direct experience to facilitate our ability to make novel inferences. Using a series of control tests we show that inference performance cannot be explained by quantitative strengthening of the experienced component links. Interestingly, we show that representing a shortcut may come with limitations, as shortcuts cannot be readily updated in response to rapid changes in the environment. 
Together, these findings reveal how memories are reorganised during awake rest to construct a cognitive map of our environment, while highlighting the constraints set by a trade-off between efficient and flexible behaviour.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10886','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10886\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Efficient and flexible cognition relies upon cognitive maps\u2014representations of concepts and the relations between them. Cognitive maps integrate relations that were learned separately into a cohesive whole. Memory reactivation during rest and sleep may contribute to cognitive map formation in two ways: by simply strengthening memories for directly experienced relations, or by reorganising concepts and creating new relations that capture the underlying structure. We designed a multi-stage learning task to test whether reactivation during rest is involved in restructuring memories as opposed to simply consolidating what was experienced. We causally manipulated memory reactivation during rest using awake, contextual targeted memory reactivation. We found that promoting memory reactivation during rest qualitatively reorganises the cognitive map by forming \u2018shortcuts\u2019 between events which have not been experienced together. These shortcuts in memory extend beyond direct experience to facilitate our ability to make novel inferences. Using a series of control tests we show that inference performance cannot be explained by quantitative strengthening of the experienced component links. Interestingly, we show that representing a shortcut may come with limitations, as shortcuts cannot be readily updated in response to rapid changes in the environment. 
Together, these findings reveal how memories are reorganised during awake rest to construct a cognitive map of our environment, while highlighting the constraints set by a trade-off between efficient and flexible behaviour.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10886','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10886\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-06742-y\" title=\"Follow DOI:10.1038\/s41598-025-06742-y\" target=\"_blank\">doi:10.1038\/s41598-025-06742-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10886','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Dixit Sharma; Bart Krekelberg<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10865','tp_abstract')\" style=\"cursor:pointer;\">Predicting spiking activity from scalp EEG<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Neural Engineering, <\/span><span class=\"tp_pub_additional_volume\">vol. 22, <\/span><span class=\"tp_pub_additional_number\">no. 6, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10865\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10865','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10865\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10865','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10865\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10865','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10865\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Sharma2025,<br \/>\r\ntitle = {Predicting spiking activity from scalp EEG},<br \/>\r\nauthor = {Dixit Sharma and Bart Krekelberg},<br \/>\r\ndoi = {10.1088\/1741-2552\/ae2541},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Journal of Neural Engineering},<br \/>\r\nvolume = {22},<br \/>\r\nnumber = {6},<br \/>\r\npages = {1\u201316},<br \/>\r\nabstract = {Objective. Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain\u2013machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG. Approach. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp-EEG in a male macaque monkey. 
While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands. Main results. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than deep cortical layers, and that the phase of stimulus frequency further improved MUAe predictions. Significance. Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10865','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10865\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Objective. Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. 
This limits our ability to infer neural dynamics reflected in intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain\u2013machine interfaces (BMIs). In this study, we aimed to estimate spiking activity in the visual cortex using non-invasive scalp EEG. Approach. We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1 and scalp-EEG in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies to induce steady-state visual evoked potentials. We analyzed the relationship between the V1 multi-unit spiking activity envelope (MUAe) and EEG frequency bands to predict MUAe at each time point from EEG. We extracted instantaneous spectrotemporal features of the EEG signal, including phase, amplitude, and phase-amplitude coupling of its frequency bands. Main results. Although the relationship between these spectrotemporal features and the V1 MUAe was complex and frequency-dependent, they were reliably predictive of the MUAe. Specifically, in a linear regression predicting MUAe from EEG, each EEG feature (phase, amplitude, coupling) contributed to model predictions. In addition, we found that MUAe predictions were better in shallow than deep cortical layers, and that the phase of stimulus frequency further improved MUAe predictions. Significance. Our study shows that a comprehensive account of spectrotemporal features of non-invasive EEG provides information on underlying spiking activity beyond what is available when only the amplitude or phase of the EEG signal is considered. 
This demonstrates the richness of the EEG signal and its complex relationship with neural spiking activity and suggests that using more comprehensive spectrotemporal signatures could improve BMI applications.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10865','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10865\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1088\/1741-2552\/ae2541\" title=\"Follow DOI:10.1088\/1741-2552\/ae2541\" target=\"_blank\">doi:10.1088\/1741-2552\/ae2541<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10865','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Samuel Shaki; Oria Pitem; Martin H. Fischer<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10845','tp_abstract')\" style=\"cursor:pointer;\">Lexical priming of space depends on how deeply you think about it<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 
1, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10845\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10845','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10845\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10845','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10845\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10845','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10845\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shaki2025,<br \/>\r\ntitle = {Lexical priming of space depends on how deeply you think about it},<br \/>\r\nauthor = {Samuel Shaki and Oria Pitem and Martin H. Fischer},<br \/>\r\ndoi = {10.1038\/s41598-025-22265-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {There is a long debate about how the meaning of words cues our spatial attention. For implicitly spatial words such as \u201cROOF\u201d or \u201cBASEMENT\u201d, it was recently shown that processing both the cue word and a subsequent spatial target stimulus was necessary for spatial congruity effects to emerge. Here we challenge this work by documenting that word cues alone suffice to induce congruity effects if they are processed deeply. Sixty-three healthy adults detected vertically displaced targets after looking at centrally presented cue words under three counterbalanced instructions, imposing increasing processing depth: Lexical decision, non-spatial categorization, and spatial categorization. 
Target detection speed revealed spatial congruity effects for both spatial and non-spatial categorization but not for lexical decision. An interpretation in terms of covert attention deployment was corroborated by concomitant vertical displacements of eye gaze. Our results reveal minimal requirements for covert and overt semantic cueing of spatial attention.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10845','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10845\" style=\"display:none;\"><div class=\"tp_abstract_entry\">There is a long debate about how the meaning of words cues our spatial attention. For implicitly spatial words such as \u201cROOF\u201d or \u201cBASEMENT\u201d, it was recently shown that processing both the cue word and a subsequent spatial target stimulus was necessary for spatial congruity effects to emerge. Here we challenge this work by documenting that word cues alone suffice to induce congruity effects if they are processed deeply. Sixty-three healthy adults detected vertically displaced targets after looking at centrally presented cue words under three counterbalanced instructions, imposing increasing processing depth: Lexical decision, non-spatial categorization, and spatial categorization. Target detection speed revealed spatial congruity effects for both spatial and non-spatial categorization but not for lexical decision. An interpretation in terms of covert attention deployment was corroborated by concomitant vertical displacements of eye gaze. 
Our results reveal minimal requirements for covert and overt semantic cueing of spatial attention.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10845','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10845\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-22265-y\" title=\"Follow DOI:10.1038\/s41598-025-22265-y\" target=\"_blank\">doi:10.1038\/s41598-025-22265-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10845','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Fatemeh Shahnabati; Atefeh Sabourifard; S. Hamid Amiri; Alireza Bosaghzadeh; Reza Ebrahimpour<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10835','tp_abstract')\" style=\"cursor:pointer;\">Cognitive load and visual attention assessment using physiological eye tracking measures in multimedia learning<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">PLoS One, <\/span><span class=\"tp_pub_additional_volume\">vol. 20, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201326, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10835\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10835','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10835\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10835','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10835\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10835','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10835\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Shahnabati2025,<br \/>\r\ntitle = {Cognitive load and visual attention assessment using physiological eye tracking measures in multimedia learning},<br \/>\r\nauthor = {Fatemeh Shahnabati and Atefeh Sabourifard and S. Hamid Amiri and Alireza Bosaghzadeh and Reza Ebrahimpour},<br \/>\r\ndoi = {10.1371\/journal.pone.0337195},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {PLoS One},<br \/>\r\nvolume = {20},<br \/>\r\npages = {1\u201326},<br \/>\r\npublisher = {Public Library of Science},<br \/>\r\nabstract = {Effective multimedia content design can boost performance, capture visual attention, and optimize cognitive load. The current study employs eye-tracking technology to establish metrics to measure cognitive load, analyze visual attention allocation, and evaluate learners' performance in English language learning. The study focuses on creating and comparing two different multimedia presentations. The differentiation between them lies in their adherence to or deviation from Mayer's educational multimedia design principles: coherence, signaling, and spatial contiguity. 
Participants were randomly assigned to two groups. The first group viewed the with-principles version, while the second group viewed the without-principles version, during which their eye movement data were collected. Subsequently, both groups participated in a recall test and completed the NASA-TLX questionnaire. The research establishes connections between specific eye-tracking parameters, subjective cognitive load scores, and recall test results through regression models and analyzes fixation distributions. The study also delves into microsaccade rate and changes in pupil size, each analyzed within times of interest. The study's findings indicate that the examined metrics can significantly help distinguish between the two conditions: principles and no principles. These metrics are pertinent for assessing individuals' cognitive load and visual attention and serve as beneficial indicators for gauging the efficacy of the designed multimedia content.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10835','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10835\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Effective multimedia content design can boost performance, capture visual attention, and optimize cognitive load. The current study employs eye-tracking technology to establish metrics to measure cognitive load, analyze visual attention allocation, and evaluate learners' performance in English language learning. The study focuses on creating and comparing two different multimedia presentations. The differentiation between them lies in their adherence to or deviation from Mayer's educational multimedia design principles: coherence, signaling, and spatial contiguity. Participants were randomly assigned to two groups. 
The first group viewed the with-principles version, while the second group viewed the without-principles version, during which their eye movement data were collected. Subsequently, both groups participated in a recall test and completed the NASA-TLX questionnaire. The research establishes connections between specific eye-tracking parameters, subjective cognitive load scores, and recall test results through regression models and analyzes fixation distributions. The study also delves into microsaccade rate and changes in pupil size, each analyzed within times of interest. The study's findings indicate that the examined metrics can significantly help distinguish between the two conditions: principles and no principles. These metrics are pertinent for assessing individuals' cognitive load and visual attention and serve as beneficial indicators for gauging the efficacy of the designed multimedia content.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10835','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10835\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1371\/journal.pone.0337195\" title=\"Follow DOI:10.1371\/journal.pone.0337195\" target=\"_blank\">doi:10.1371\/journal.pone.0337195<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10835','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Yelda Semizer; Ruth Rosenholtz<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10799','tp_abstract')\" style=\"cursor:pointer;\">The effect of background clutter on visual search in video conferencing<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognitive Research: Principles and Implications, <\/span><span class=\"tp_pub_additional_volume\">vol. 10, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10799\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10799','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10799\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10799','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10799\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10799','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10799\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Semizer2025,<br \/>\r\ntitle = {The effect of background clutter on visual search in video conferencing},<br \/>\r\nauthor = {Yelda Semizer and Ruth Rosenholtz},<br \/>\r\ndoi = {10.1186\/s41235-025-00643-4},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Cognitive Research: Principles and Implications},<br \/>\r\nvolume = {10},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {The use of video conferencing tools has become increasingly common recently. The visual displays in these tools are highly complex, being composed of multiple faces with varying image quality and lighting conditions. 
On top of this, users have the ability to choose their own backgrounds. Some choose simple artificial backgrounds, some appear in front of a real or simulated room, and some use something more abstract. How do these choices affect the user's ability to use the tool, for example, finding the current speaker or a reaction symbol? Vision science can certainly provide answers to these questions; however, most search studies use simple displays with a uniform background, or more recently, real-world scenes. How does what we know about search generalize to these more complex displays? The current study sought to examine how our understanding of visual search applies to well-controlled video conferencing displays. Specifically, we investigated the effect of display clutter (i.e., background complexity and variability) on perceptual tasks relevant for video conferencing. In an eye-tracking set-up, participants searched either for the speaker whose image was highlighted (Experiment 1) or for a reaction symbol (raised-hand) embedded on one of the attendees' backgrounds. Results showed a significant effect of background complexity and variability, suggesting that search performance declined as the display clutter increased. Image-based analysis showed that the choice of backgrounds mediated these effects, suggesting that some virtual backgrounds were not optimal for perceptual processes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10799','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10799\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The use of video conferencing tools has become increasingly common recently. The visual displays in these tools are highly complex, being composed of multiple faces with varying image quality and lighting conditions. 
On top of this, users have the ability to choose their own backgrounds. Some choose simple artificial backgrounds, some appear in front of a real or simulated room, and some use something more abstract. How do these choices affect the user's ability to use the tool, for example, finding the current speaker or a reaction symbol? Vision science can certainly provide answers to these questions; however, most search studies use simple displays with a uniform background, or more recently, real-world scenes. How does what we know about search generalize to these more complex displays? The current study sought to examine how our understanding of visual search applies to well-controlled video conferencing displays. Specifically, we investigated the effect of display clutter (i.e., background complexity and variability) on perceptual tasks relevant for video conferencing. In an eye-tracking set-up, participants searched either for the speaker whose image was highlighted (Experiment 1) or for a reaction symbol (raised-hand) embedded on one of the attendees' backgrounds. Results showed a significant effect of background complexity and variability, suggesting that search performance declined as the display clutter increased. 
Image-based analysis showed that the choice of backgrounds mediated these effects, suggesting that some virtual backgrounds were not optimal for perceptual processes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10799','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10799\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s41235-025-00643-4\" title=\"Follow DOI:10.1186\/s41235-025-00643-4\" target=\"_blank\">doi:10.1186\/s41235-025-00643-4<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10799','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Alia Seedat; Alex Lepauvre; Jay Jeschke; Urszula Gorska-Klimowska; Marcelo Armendariz; Katarina Bendtz; Simon Henin; Rony Hirschhorn; Tanya Brown; Erika Jensen; Csaba Kozma; David Mazumder; Stephanie Montenegro; Leyao Yu; Niccol\u00f2 Bonacchi; Diptyajit Das; Kyle Kahraman; Praveen Sripad; Fatemeh Taheriyan; Orrin Devinsky; Patricia Dugan; Werner Doyle; Adeen Flinker; Daniel Friedman; Wendell Lake; Michael Pitts; Liad Mudrik; Melanie Boly; Sasha Devore; Gabriel Kreiman; Lucia Melloni<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10762','tp_abstract')\" style=\"cursor:pointer;\">Open multi-center intracranial electroencephalography dataset with task probing conscious visual perception<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Data, <\/span><span class=\"tp_pub_additional_volume\">vol. 12, <\/span><span class=\"tp_pub_additional_number\">no. 
1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201314, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10762','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10762','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10762','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10762\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Seedat2025,<br \/>\r\ntitle = {Open multi-center intracranial electroencephalography dataset with task probing conscious visual perception},<br \/>\r\nauthor = {Alia Seedat and Alex Lepauvre and Jay Jeschke and Urszula Gorska-Klimowska and Marcelo Armendariz and Katarina Bendtz and Simon Henin and Rony Hirschhorn and Tanya Brown and Erika Jensen and Csaba Kozma and David Mazumder and Stephanie Montenegro and Leyao Yu and Niccol\u00f2 Bonacchi and Diptyajit Das and Kyle Kahraman and Praveen Sripad and Fatemeh Taheriyan and Orrin Devinsky and Patricia Dugan and Werner Doyle and Adeen Flinker and Daniel Friedman and Wendell Lake and Michael Pitts and Liad Mudrik and Melanie Boly and Sasha Devore and Gabriel Kreiman and Lucia Melloni},<br \/>\r\ndoi = {10.1038\/s41597-025-04833-z},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Data},<br \/>\r\nvolume = {12},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201314},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {We introduce an intracranial EEG (iEEG) dataset 
collected as part of an adversarial collaboration between proponents of two theories of consciousness: Global Neuronal Workspace Theory and Integrated Information Theory. The data were recorded from 38 patients undergoing intracranial monitoring of epileptic seizures across three research centers using the same experimental protocol. Participants were presented with suprathreshold visual stimuli belonging to four different categories (faces, objects, letters, false fonts) in three orientations (front, left, right view), and for three durations (0.5, 1.0, 1.5 s). Participants engaged in a non-speeded Go\/No-Go target detection task to identify infrequent targets with some stimuli becoming task-relevant and others task-irrelevant. Participants also engaged in a motor localizer task. The data were checked for quality and converted to Brain Imaging Data Structure (BIDS). The de-identified dataset contains demographics, clinical information, electrode reconstruction, behavioral performance, and eye-tracking data. We also provide code to preprocess and analyze the data. This dataset holds promise for reuse in consciousness science and vision neuroscience to answer questions related to stimulus processing, target detection, and task-relevance, among many others.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10762','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10762\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We introduce an intracranial EEG (iEEG) dataset collected as part of an adversarial collaboration between proponents of two theories of consciousness: Global Neuronal Workspace Theory and Integrated Information Theory. 
The data were recorded from 38 patients undergoing intracranial monitoring of epileptic seizures across three research centers using the same experimental protocol. Participants were presented with suprathreshold visual stimuli belonging to four different categories (faces, objects, letters, false fonts) in three orientations (front, left, right view), and for three durations (0.5, 1.0, 1.5 s). Participants engaged in a non-speeded Go\/No-Go target detection task to identify infrequent targets with some stimuli becoming task-relevant and others task-irrelevant. Participants also engaged in a motor localizer task. The data were checked for quality and converted to Brain Imaging Data Structure (BIDS). The de-identified dataset contains demographics, clinical information, electrode reconstruction, behavioral performance, and eye-tracking data. We also provide code to preprocess and analyze the data. This dataset holds promise for reuse in consciousness science and vision neuroscience to answer questions related to stimulus processing, target detection, and task-relevance, among many others.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10762','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10762\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41597-025-04833-z\" title=\"Follow DOI:10.1038\/s41597-025-04833-z\" target=\"_blank\">doi:10.1038\/s41597-025-04833-z<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10762','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Lara Stella Marie Schroth; Wim Fias; Muhammet Ikbal Sahan<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" 
onclick=\"teachpress_pub_showhide('10671','tp_abstract')\" style=\"cursor:pointer;\">Eye movements follow the dynamic shifts of attention through serial order in verbal working memory<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201311, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10671\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10671','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10671\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10671','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10671\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10671','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10671\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Schroth2025,<br \/>\r\ntitle = {Eye movements follow the dynamic shifts of attention through serial order in verbal working memory},<br \/>\r\nauthor = {Lara Stella Marie Schroth and Wim Fias and Muhammet Ikbal Sahan},<br \/>\r\ndoi = {10.1038\/s41598-024-85015-6},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201311},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {How are arbitrary sequences of verbal information retained 
and manipulated in working memory? Increasing evidence suggests that serial order in verbal WM is spatially coded and that spatial attention is involved in access and retrieval. Based on the idea that brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order. While their eye movements were being measured, they named the memorized items in a self-determined order in Experiment 1 and in a cued order in Experiment 2. We tested the hypothesis that serial order in verbal working memory interacts with the spatial attention system whereby gaze patterns in visual space closely follow attentional shifts in the internal space of working memory. In both experiments, we found that the gaze shifts in visual space correlated with the spatial shifts of attention along the left-to-right one-dimensional mapping of serial order positions in verbal WM. These findings suggest that spatial attention is employed for dynamically searching through verbal WM and that eye movements reflect the spontaneous association of order and space even in the absence of visuospatial input.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10671','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10671\" style=\"display:none;\"><div class=\"tp_abstract_entry\">How are arbitrary sequences of verbal information retained and manipulated in working memory? Increasing evidence suggests that serial order in verbal WM is spatially coded and that spatial attention is involved in access and retrieval. 
Based on the idea that brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order. While their eye movements were being measured, they named the memorized items in a self-determined order in Experiment 1 and in a cued order in Experiment 2. We tested the hypothesis that serial order in verbal working memory interacts with the spatial attention system whereby gaze patterns in visual space closely follow attentional shifts in the internal space of working memory. In both experiments, we found that the gaze shifts in visual space correlated with the spatial shifts of attention along the left-to-right one-dimensional mapping of serial order positions in verbal WM. These findings suggest that spatial attention is employed for dynamically searching through verbal WM and that eye movements reflect the spontaneous association of order and space even in the absence of visuospatial input.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10671','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10671\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-024-85015-6\" title=\"Follow DOI:10.1038\/s41598-024-85015-6\" target=\"_blank\">doi:10.1038\/s41598-024-85015-6<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10671','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Eda Sar\u0131; Furkan Dindaro\u011flu; Belk\u0131s Durmu\u015f; Sonia Amado<\/p><p class=\"tp_pub_title\"><a 
class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10460','tp_abstract')\" style=\"cursor:pointer;\">Exploring mandibular asymmetry: Insights from visual perception using eye-tracking technology<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">BMC Oral Health, <\/span><span class=\"tp_pub_additional_volume\">vol. 25, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201310, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10460\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10460','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10460\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10460','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10460\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10460','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10460\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Sar2025,<br \/>\r\ntitle = {Exploring mandibular asymmetry: Insights from visual perception using eye-tracking technology},<br \/>\r\nauthor = {Eda Sar\u0131 and Furkan Dindaro\u011flu and Belk\u0131s Durmu\u015f and Sonia Amado},<br \/>\r\ndoi = {10.1186\/s12903-025-06747-z},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {BMC Oral Health},<br \/>\r\nvolume = {25},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201310},<br \/>\r\npublisher = {BioMed Central Ltd},<br \/>\r\nabstract = {Background: The visual attention 
provides an objective perspective on how a stimulus take attention. In dentistry, one of the important facial determinants in esthetic perception is the mandibular asymmetry. The study aimed to evaluate the eye movements of the orthodontists and non-professionals on the images with different severity of mandibular asymmetry using eye tracking technology. Methods: The eye movements of 26 orthodontists and 30 non-professionals were captured. Thirty images were visually evaluated for the presence of mandibular asymmetry by two orthodontists. 2 mm, 4 mm, 6 mm, and 8 mm chin deviation were simulated on the images and the images without asymmetry were considered as control group. A total of 50 photographs from 10 individuals were included in the study. Participants' eye movements were recorded using an Eyelink 1000 plus eye-tracking device (Sr-Research, Canada). Repeated Measures Analysis of Variance (ANOVA) was used for statistical comparisons. Results: The number of fixations on the lower lip-chin area in either the right or left direction did not show a statistically significant difference. (F(1,000;59,000) = 2.133, p &gt; 0.05,). Time to first fixation was faster to the lower lip-chin area in 8 mm asymmetry condition compared to 2 mm (F(1,2) = 31.423, p &lt; 0.05, \u03b7p2 = 0.940). Orthodontists made less fixations before the lower lip-chin area in 8 mm condition compared to 2 mm (F(1,2) = 20.758, p &lt; 0.05, \u03b7p2 = 0.912). Conclusions: While the direction of mandibular asymmetry did not affect voluntary attention, an increase in asymmetry, regardless of profession, attracted more attention to the lower lip-chin area. 
While the 8 mm asymmetry caught the involuntary attention of orthodontists, the same did not occur in non-professionals.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10460','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10460\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Background: The visual attention provides an objective perspective on how a stimulus captures attention. In dentistry, one of the important facial determinants in esthetic perception is mandibular asymmetry. The study aimed to evaluate the eye movements of orthodontists and non-professionals on images with different severities of mandibular asymmetry using eye-tracking technology. Methods: The eye movements of 26 orthodontists and 30 non-professionals were captured. Thirty images were visually evaluated for the presence of mandibular asymmetry by two orthodontists. Chin deviations of 2 mm, 4 mm, 6 mm, and 8 mm were simulated on the images, and images without asymmetry served as the control group. A total of 50 photographs from 10 individuals were included in the study. Participants' eye movements were recorded using an EyeLink 1000 Plus eye-tracking device (SR Research, Canada). Repeated Measures Analysis of Variance (ANOVA) was used for statistical comparisons. Results: The number of fixations on the lower lip-chin area in either the right or left direction did not show a statistically significant difference (F(1,000;59,000) = 2.133, p &gt; 0.05). Time to first fixation was faster to the lower lip-chin area in the 8 mm asymmetry condition compared to 2 mm (F(1,2) = 31.423, p &lt; 0.05, \u03b7p2 = 0.940). Orthodontists made fewer fixations before the lower lip-chin area in the 8 mm condition compared to 2 mm (F(1,2) = 20.758, p &lt; 0.05, \u03b7p2 = 0.912). 
Conclusions: While the direction of mandibular asymmetry did not affect voluntary attention, an increase in asymmetry, regardless of profession, attracted more attention to the lower lip-chin area. While the 8 mm asymmetry caught the involuntary attention of orthodontists, the same did not occur in non-professionals.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10460','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10460\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s12903-025-06747-z\" title=\"Follow DOI:10.1186\/s12903-025-06747-z\" target=\"_blank\">doi:10.1186\/s12903-025-06747-z<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10460','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Anthony W. Sali; Madison P. Shaver; Anna B. Toledo; Austin L. Torain; Isabel N. Flicker<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10398','tp_abstract')\" style=\"cursor:pointer;\">Learned saccade readiness varies with fluctuations in sustained attention<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201315, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10398\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10398','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10398\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10398','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10398\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10398','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10398\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Sali2025,<br \/>\r\ntitle = {Learned saccade readiness varies with fluctuations in sustained attention},<br \/>\r\nauthor = {Anthony W. Sali and Madison P. Shaver and Anna B. Toledo and Austin L. Torain and Isabel N. Flicker},<br \/>\r\ndoi = {10.1038\/s41598-025-14340-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Both the focus of sustained attention and an individual's readiness to shift attention among spatial locations fluctuate over time. However, the interaction of these ongoing changes in attentional states remains unknown. In the current study, participants completed a modified gradual continuous performance task during which they monitored one of two lateralized streams of black and white images for the appearance of frequent target stimuli, withholding responses to foils. 
Periodically, a visual cue signaled participants to either maintain fixation at the current stream or to make a saccade to the opposing stream, and participants made a parity categorization for a digit appearing at the cued location. Trial-by-trial variation in pupil size, an indicator of arousal, accounted for both fluctuations in sustained attention and shift readiness but fluctuations in sustained attention were not associated with general modulations of shift readiness. Furthermore, we manipulated the frequency of gaze shift cues over time and observed that unexpected shift cues were most disruptive when participants lacked sustained focus, yielding a greater cost in saccade latencies than when the efficacy of sustained attention was high. Our results suggest that ongoing changes in sustained attention occur independently from gaze shifting readiness but carry consequences for learned saccade preparation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10398','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10398\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Both the focus of sustained attention and an individual's readiness to shift attention among spatial locations fluctuate over time. However, the interaction of these ongoing changes in attentional states remains unknown. In the current study, participants completed a modified gradual continuous performance task during which they monitored one of two lateralized streams of black and white images for the appearance of frequent target stimuli, withholding responses to foils. Periodically, a visual cue signaled participants to either maintain fixation at the current stream or to make a saccade to the opposing stream, and participants made a parity categorization for a digit appearing at the cued location. 
Trial-by-trial variation in pupil size, an indicator of arousal, accounted for both fluctuations in sustained attention and shift readiness but fluctuations in sustained attention were not associated with general modulations of shift readiness. Furthermore, we manipulated the frequency of gaze shift cues over time and observed that unexpected shift cues were most disruptive when participants lacked sustained focus, yielding a greater cost in saccade latencies than when the efficacy of sustained attention was high. Our results suggest that ongoing changes in sustained attention occur independently from gaze shifting readiness but carry consequences for learned saccade preparation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10398','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10398\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-14340-1\" title=\"Follow DOI:10.1038\/s41598-025-14340-1\" target=\"_blank\">doi:10.1038\/s41598-025-14340-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10398','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Cristina Rubino; Adam T. Harrison; Lara A. Boyd<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10295','tp_abstract')\" style=\"cursor:pointer;\">Oculomotor learning is evident during implicit motor sequence learning<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 
15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10295\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10295','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10295\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10295','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10295\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10295','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10295\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Rubino2025,<br \/>\r\ntitle = {Oculomotor learning is evident during implicit motor sequence learning},<br \/>\r\nauthor = {Cristina Rubino and Adam T. Harrison and Lara A. Boyd},<br \/>\r\ndoi = {10.1038\/s41598-025-93498-0},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Motor sequence learning involves both oculomotor and manual motor systems, yet the role of the oculomotor system in the learning and execution of skilled arm movements remains underexplored. In the current work, the influence of sequence learning on the oculomotor system was investigated by testing 20 healthy adults for 3\u00a0days as they practiced an implicit motor learning task, the serial targeting task (STT). The STT contained a repeated sequence, which was interleaved with random sequences. This task was practiced on a KINARM robot that tracked both saccades and reaches. 
A delayed, 24-h retention test assessed sequence-specific motor learning. Sequence-specific changes across practice and learning were observed for both saccades and reaches; this was demonstrated by faster saccade and arm motor reaction times for the repeated sequence compared to random sequences. Notably, change in the oculomotor system occurred earlier in practice as compared to the manual motor system. Reaches were executed more quickly when led by express saccades (rapid eye movements occurring within 90\u2013120\u00a0ms) compared to when they were preceded by regular latency (&gt; 120\u00a0ms) saccades early in practice. Our findings highlight distinct yet interconnected functions between oculomotor and manual motor systems associated with implicit motor sequence learning.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10295','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10295\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Motor sequence learning involves both oculomotor and manual motor systems, yet the role of the oculomotor system in the learning and execution of skilled arm movements remains underexplored. In the current work, the influence of sequence learning on the oculomotor system was investigated by testing 20 healthy adults for 3\u00a0days as they practiced an implicit motor learning task, the serial targeting task (STT). The STT contained a repeated sequence, which was interleaved with random sequences. This task was practiced on a KINARM robot that tracked both saccades and reaches. A delayed, 24-h retention test assessed sequence-specific motor learning. 
Sequence-specific changes across practice and learning were observed for both saccades and reaches; this was demonstrated by faster saccade and arm motor reaction times for the repeated sequence compared to random sequences. Notably, change in the oculomotor system occurred earlier in practice as compared to the manual motor system. Reaches were executed more quickly when led by express saccades (rapid eye movements occurring within 90\u2013120\u00a0ms) compared to when they were preceded by regular latency (&gt; 120\u00a0ms) saccades early in practice. Our findings highlight distinct yet interconnected functions between oculomotor and manual motor systems associated with implicit motor sequence learning.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10295','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10295\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-93498-0\" title=\"Follow DOI:10.1038\/s41598-025-93498-0\" target=\"_blank\">doi:10.1038\/s41598-025-93498-0<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10295','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Gonzalo Ruarte; Gaston Bujia; Dami\u00e1n Care; Matias Julian Ison; Juan Esteban Kamienkowski<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10288','tp_abstract')\" style=\"cursor:pointer;\">Integrating Bayesian and neural networks models for eye movement prediction in hybrid search<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific 
Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201315, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10288\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10288','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10288\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10288','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10288\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10288','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10288\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Ruarte2025,<br \/>\r\ntitle = {Integrating Bayesian and neural networks models for eye movement prediction in hybrid search},<br \/>\r\nauthor = {Gonzalo Ruarte and Gaston Bujia and Dami\u00e1n Care and Matias Julian Ison and Juan Esteban Kamienkowski},<br \/>\r\ndoi = {10.1038\/s41598-025-00272-3},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Visual search is crucial in daily human interaction with the environment. Hybrid search extends this by requiring observers to find any item from a given set. Recently, a few models were proposed to simulate human eye movements in visual search tasks within natural scenes, but none were implemented for Hybrid search under similar conditions. 
We present an enhanced neural network Entropy Limit Minimization (nnELM) model, grounded in a Bayesian framework and signal detection theory, and the Hybrid Search Eye Movements (HSEM) Dataset, containing thousands of human eye movements during hybrid tasks. A key Hybrid search challenge is that participants have to look for different objects at the same time. To address this, we developed several strategies involving the posterior probability distributions after each fixation. Adjusting peripheral visibility improved early-stage efficiency, aligning it with human behavior. Limiting the model's memory reduced success in longer searches, mirroring human performance. We validated these improvements by comparing our model with a held-out set within the HSEM and with other models in a separate visual search benchmark. Overall, the new nnELM model not only handles Hybrid search in natural scenes but also closely replicates human behavior, advancing our understanding of search processes while maintaining interpretability.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10288','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10288\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visual search is crucial in daily human interaction with the environment. Hybrid search extends this by requiring observers to find any item from a given set. Recently, a few models were proposed to simulate human eye movements in visual search tasks within natural scenes, but none were implemented for Hybrid search under similar conditions. 
We present an enhanced neural network Entropy Limit Minimization (nnELM) model, grounded in a Bayesian framework and signal detection theory, and the Hybrid Search Eye Movements (HSEM) Dataset, containing thousands of human eye movements during hybrid tasks. A key Hybrid search challenge is that participants have to look for different objects at the same time. To address this, we developed several strategies involving the posterior probability distributions after each fixation. Adjusting peripheral visibility improved early-stage efficiency, aligning it with human behavior. Limiting the model's memory reduced success in longer searches, mirroring human performance. We validated these improvements by comparing our model with a held-out set within the HSEM and with other models in a separate visual search benchmark. Overall, the new nnELM model not only handles Hybrid search in natural scenes but also closely replicates human behavior, advancing our understanding of search processes while maintaining interpretability.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10288','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10288\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-00272-3\" title=\"Follow DOI:10.1038\/s41598-025-00272-3\" target=\"_blank\">doi:10.1038\/s41598-025-00272-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10288','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Martin Rolfs; Richard Schweitzer; Eric Castet; Tamara L. 
Watson; Sven Ohl<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10190','tp_abstract')\" style=\"cursor:pointer;\">Lawful kinematics link eye movements to the limits of high-speed perception<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201317, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10190\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10190','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10190\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10190','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10190\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10190','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10190\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Rolfs2025,<br \/>\r\ntitle = {Lawful kinematics link eye movements to the limits of high-speed perception},<br \/>\r\nauthor = {Martin Rolfs and Richard Schweitzer and Eric Castet and Tamara L. 
Watson and Sven Ohl},<br \/>\r\ndoi = {10.1038\/s41467-025-58659-9},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Perception requires active sampling of the environment. What part of the physical world can be perceived is limited by the sensory system's biophysical setup, but might be further constrained by the kinematic bounds of the motor actions used to acquire sensory information. Here, we tested this fundamental idea for humans' fastest and most frequent behavior\u2014saccadic eye movements\u2014which entail incidental sensory consequences (i.e., swift retinal motion) that rarely reach awareness in natural vision. Using high-speed video projection, we display rapidly moving stimuli that faithfully reproduce, or deviate from, saccades' lawful relation of velocity, duration, and amplitude. For each stimulus, observers perform perceptual tasks for which performance is contingent on consciously seeing the stimulus' motion trajectory. We uncover that visibility of the stimulus' movement is well predicted by the specific kinematics of saccades and their sensorimotor contingencies, reflecting even variability between individual observers. Computational modeling shows that spatiotemporal integration during early visual processing predicts this lawful relation in a tight range of biologically plausible parameters. 
These results suggest that the visual system takes into account motor kinematics when omitting an action's incidental sensory consequences, thereby preserving visual sensitivity to high-speed object motion.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10190','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10190\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Perception requires active sampling of the environment. What part of the physical world can be perceived is limited by the sensory system's biophysical setup, but might be further constrained by the kinematic bounds of the motor actions used to acquire sensory information. Here, we tested this fundamental idea for humans' fastest and most frequent behavior\u2014saccadic eye movements\u2014which entail incidental sensory consequences (i.e., swift retinal motion) that rarely reach awareness in natural vision. Using high-speed video projection, we display rapidly moving stimuli that faithfully reproduce, or deviate from, saccades' lawful relation of velocity, duration, and amplitude. For each stimulus, observers perform perceptual tasks for which performance is contingent on consciously seeing the stimulus' motion trajectory. We uncover that visibility of the stimulus' movement is well predicted by the specific kinematics of saccades and their sensorimotor contingencies, reflecting even variability between individual observers. Computational modeling shows that spatiotemporal integration during early visual processing predicts this lawful relation in a tight range of biologically plausible parameters. 
These results suggest that the visual system takes into account motor kinematics when omitting an action's incidental sensory consequences, thereby preserving visual sensitivity to high-speed object motion.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10190','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10190\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-58659-9\" title=\"Follow DOI:10.1038\/s41467-025-58659-9\" target=\"_blank\">doi:10.1038\/s41467-025-58659-9<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10190','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Jonathan Edward Robinson; Andrew W. Corcoran; Christopher J. Whyte; Andr\u00e1s S\u00e1rk\u00f6zy; Anil K. Seth; Gyula Kov\u00e1cs; Karl J. Friston; Cyriel M. A. Pennartz; Giulio Tononi; Jakob Hohwy<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10142','tp_abstract')\" style=\"cursor:pointer;\">The role of active inference in conscious awareness<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">PLoS One, <\/span><span class=\"tp_pub_additional_volume\">vol. 20, <\/span><span class=\"tp_pub_additional_number\">no. 12, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201320, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10142\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10142','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10142\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10142','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10142\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10142','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10142\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Robinson2025,<br \/>\r\ntitle = {The role of active inference in conscious awareness},<br \/>\r\nauthor = {Jonathan Edward Robinson and Andrew W. Corcoran and Christopher J. Whyte and Andr\u00e1s S\u00e1rk\u00f6zy and Anil K. Seth and Gyula Kov\u00e1cs and Karl J. Friston and Cyriel M. A. Pennartz and Giulio Tononi and Jakob Hohwy},<br \/>\r\ndoi = {10.1371\/journal.pone.0328836},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {PLoS One},<br \/>\r\nvolume = {20},<br \/>\r\nnumber = {12},<br \/>\r\npages = {1\u201320},<br \/>\r\nabstract = {Active inference, a first-principles framework for modelling the behaviour of sentient agents, is beginning to be applied in consciousness research. One hypothesis arising from the framework is that active inference is necessary for changes in conscious content. As one component of an extensive adversarial collaboration among competing theories of consciousness, active inference will be contrasted with two other theories of consciousness, neither of which posit that active inference is necessary for consciousness. 
Here, we thus present a Study Protocol designed to test the active inference hypothesis using a carefully controlled adaptation of the motion-induced blindness paradigm, where an 'active' condition with richer active inference is contrasted with a 'passive' condition. In the active condition, participants direct their gaze towards a target stimulus following its disappearance from consciousness, and report on its subsequent reappearance. In the passive condition, participants maintain central fixation, while the stimulus array is moved across the visual field (in a replay of the active condition based on eye-tracking data acquired during active trials). In two experiments, we plan to investigate target reappearance across active and passive conditions to evaluate the contribution of active inference to conscious awareness. Results will eventually be considered in the context of all the experiments conducted as part of the overall adversarial collaboration.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10142','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10142\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Active inference, a first-principles framework for modelling the behaviour of sentient agents, is beginning to be applied in consciousness research. One hypothesis arising from the framework is that active inference is necessary for changes in conscious content. As one component of an extensive adversarial collaboration among competing theories of consciousness, active inference will be contrasted with two other theories of consciousness, neither of which posit that active inference is necessary for consciousness. 
Here, we thus present a Study Protocol designed to test the active inference hypothesis using a carefully controlled adaptation of the motion-induced blindness paradigm, where an 'active' condition with richer active inference is contrasted with a 'passive' condition. In the active condition, participants direct their gaze towards a target stimulus following its disappearance from consciousness, and report on its subsequent reappearance. In the passive condition, participants maintain central fixation, while the stimulus array is moved across the visual field (in a replay of the active condition based on eye-tracking data acquired during active trials). In two experiments, we plan to investigate target reappearance across active and passive conditions to evaluate the contribution of active inference to conscious awareness. Results will eventually be considered in the context of all the experiments conducted as part of the overall adversarial collaboration.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10142','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10142\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1371\/journal.pone.0328836\" title=\"Follow DOI:10.1371\/journal.pone.0328836\" target=\"_blank\">doi:10.1371\/journal.pone.0328836<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10142','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ping Ran; Meng Ying Sun; Qian Sun; Qi Sun<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9860','tp_abstract')\" style=\"cursor:pointer;\">Effects of local information and egocentric reference frames on estimation of biological motion 
direction<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Psychological Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 89, <\/span><span class=\"tp_pub_additional_number\">no. 6, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201317, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9860\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9860','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9860\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9860','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9860\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9860','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9860\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Ran2025a,<br \/>\r\ntitle = {Effects of local information and egocentric reference frames on estimation of biological motion direction},<br \/>\r\nauthor = {Ping Ran and Meng Ying Sun and Qian Sun and Qi Sun},<br \/>\r\ndoi = {10.1007\/s00426-025-02208-y},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Psychological Research},<br \/>\r\nvolume = {89},<br \/>\r\nnumber = {6},<br \/>\r\npages = {1\u201317},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {Previous studies have established that coarse discrimination (e.g., left\/right, forward\/backward) of point-light walker (PLW) direction is modulated by multiple factors including global\/local motion 
information, biological\/social factors, and egocentric reference frames. However, the specific contributions of local motion information and egocentric referencing to fine-grained PLW direction estimation remain unclear. Drawing upon principles of biomechanical asymmetry and right-lateralized motor dominance, we hypothesized a systematic overall rightward bias in PLW direction estimation. Through three carefully controlled experiments, we demonstrated that: (1) right-handed participants showed consistently overall rightward estimation bias; (2) this bias was selectively enhanced by right-sided body stimuli while remaining unaffected by left-sided stimuli; and (3) spatial decoupling of stimulus center from egocentric coordinates revealed persistent egocentric coding in the direction estimation. Moreover, prolonged stimulus exposure led to expanded gaze distribution alongside heightened local information processing, underscoring the pivotal role of local information. These findings suggest that biomechanical asymmetries may shape PLW direction perception and reveal the interplay between local information analysis and egocentric referencing in fine-grained biological motion estimation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9860','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9860\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Previous studies have established that coarse discrimination (e.g., left\/right, forward\/backward) of point-light walker (PLW) direction is modulated by multiple factors including global\/local motion information, biological\/social factors, and egocentric reference frames. However, the specific contributions of local motion information and egocentric referencing to fine-grained PLW direction estimation remain unclear. 
Drawing upon principles of biomechanical asymmetry and right-lateralized motor dominance, we hypothesized a systematic overall rightward bias in PLW direction estimation. Through three carefully controlled experiments, we demonstrated that: (1) right-handed participants showed consistently overall rightward estimation bias; (2) this bias was selectively enhanced by right-sided body stimuli while remaining unaffected by left-sided stimuli; and (3) spatial decoupling of stimulus center from egocentric coordinates revealed persistent egocentric coding in the direction estimation. Moreover, prolonged stimulus exposure led to expanded gaze distribution alongside heightened local information processing, underscoring the pivotal role of local information. These findings suggest that biomechanical asymmetries may shape PLW direction perception and reveal the interplay between local information analysis and egocentric referencing in fine-grained biological motion estimation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9860','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9860\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s00426-025-02208-y\" title=\"Follow DOI:10.1007\/s00426-025-02208-y\" target=\"_blank\">doi:10.1007\/s00426-025-02208-y<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9860','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Rajani Raman; Anna Bogn\u00e1r; Ghazaleh Ghamkhari Nejad; Albert Mukovskiy; Lucas Martini; Martin Giese; Rufin Vogels<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9838','tp_abstract')\" style=\"cursor:pointer;\">Keypoint-based 
modeling reveals fine-grained body pose tuning in superior temporal sulcus neurons<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201316, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9838\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9838','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9838\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9838','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9838\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9838','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9838\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Raman2025,<br \/>\r\ntitle = {Keypoint-based modeling reveals fine-grained body pose tuning in superior temporal sulcus neurons},<br \/>\r\nauthor = {Rajani Raman and Anna Bogn\u00e1r and Ghazaleh Ghamkhari Nejad and Albert Mukovskiy and Lucas Martini and Martin Giese and Rufin Vogels},<br \/>\r\ndoi = {10.1038\/s41467-025-60945-5},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201316},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Body pose and orientation serve as vital visual signals in primate non-verbal social 
communication. Leveraging deep learning algorithms that extract body poses from videos of behaving monkeys, applied to a monkey avatar, we investigated neural tuning for pose and viewpoint, targeting fMRI-defined mid and anterior Superior Temporal Sulcus (STS) body patches. We modeled the pose and viewpoint selectivity of the units with keypoint-based principal component regression with cross-validation and applied model inversion as a key approach to identify effective body parts and views. Mid STS units were effectively modeled using view-dependent 2D keypoint representations, revealing that their responses were driven by specific body parts that differed among neurons. Some anterior STS units exhibited better predictive performances with a view-dependent 3D model. On average, anterior STS units were better fitted by a keypoint-based model incorporating mirror-symmetric viewpoint tuning than by view-dependent 2D and 3D keypoint models. However, in both regions, a view-independent keypoint model resulted in worse predictive performance. This keypoint-based approach provides insights into how the primate visual system encodes socially relevant body cues, deepening our understanding of body pose representation in the STS.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9838','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9838\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Body pose and orientation serve as vital visual signals in primate non-verbal social communication. Leveraging deep learning algorithms that extract body poses from videos of behaving monkeys, applied to a monkey avatar, we investigated neural tuning for pose and viewpoint, targeting fMRI-defined mid and anterior Superior Temporal Sulcus (STS) body patches. 
We modeled the pose and viewpoint selectivity of the units with keypoint-based principal component regression with cross-validation and applied model inversion as a key approach to identify effective body parts and views. Mid STS units were effectively modeled using view-dependent 2D keypoint representations, revealing that their responses were driven by specific body parts that differed among neurons. Some anterior STS units exhibited better predictive performances with a view-dependent 3D model. On average, anterior STS units were better fitted by a keypoint-based model incorporating mirror-symmetric viewpoint tuning than by view-dependent 2D and 3D keypoint models. However, in both regions, a view-independent keypoint model resulted in worse predictive performance. This keypoint-based approach provides insights into how the primate visual system encodes socially relevant body cues, deepening our understanding of body pose representation in the STS.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9838','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9838\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-60945-5\" title=\"Follow DOI:10.1038\/s41467-025-60945-5\" target=\"_blank\">doi:10.1038\/s41467-025-60945-5<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9838','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Estelle Raffin; Michele Bevilacqua; Fabienne Windel; Pauline Menoud; Roberto F. Salamanca-Giron; Sarah Feroldi; Sarah B. Zandvliet; Nicola Ramdass; Laurijn Draaisma; Patrik Vuilleumier; Adrian G Guggisberg; Christophe Bonvin; Lisa Fleury; Krystel R. Huxlin; Elena Beanato; Friedhelm C. 
Hummel<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9801','tp_abstract')\" style=\"cursor:pointer;\">Boosting hemianopia recovery: The power of interareal cross-frequency brain stimulation<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Brain, <\/span><span class=\"tp_pub_additional_volume\">vol. 148, <\/span><span class=\"tp_pub_additional_pages\">pp. 4548\u20134561, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9801','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9801','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9801\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9801','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9801\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Raffin2025,<br \/>\r\ntitle = {Boosting hemianopia recovery: The power of interareal cross-frequency brain stimulation},<br \/>\r\nauthor = {Estelle Raffin and Michele Bevilacqua and Fabienne Windel and Pauline Menoud and Roberto F. Salamanca-Giron and Sarah Feroldi and Sarah B. Zandvliet and Nicola Ramdass and Laurijn Draaisma and Patrik Vuilleumier and Adrian G Guggisberg and Christophe Bonvin and Lisa Fleury and Krystel R. Huxlin and Elena Beanato and Friedhelm C. 
Hummel},<br \/>\r\ndoi = {10.1093\/brain\/awaf252},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Brain},<br \/>\r\nvolume = {148},<br \/>\r\npages = {4548\u20134561},<br \/>\r\npublisher = {Oxford University Press (OUP)},<br \/>\r\nabstract = {Visual field loss is a common consequence of stroke and manifests in approximately one-third of patients in the chronic stage. Such loss can significantly impact daily life activities, compromising tasks such as reading, navigating or driving. Although slow and labour intensive, evidence suggests that early interventions with tailored rehabilitation programmes might stimulate visual recovery and improve quality of life in stroke survivors. To enhance the effects of such rehabilitation programmes, we designed a novel, non-invasive, pathway-specific, physiology-inspired cross-frequency brain stimulation protocol, where complex oscillatory signal integration was inferred from phase\u2013amplitude coupling of oscillatory signals between the primary visual cortex and the motion-sensitive medio-temporal area. Sixteen stroke patients were enrolled in a double-blind, randomized, cross-over trial, during which they performed two blocks of 10 daily training sessions of a direction discrimination task, combined with one of the two cross-frequency transcranial alternating current stimulation (cf-tACS versus control cf-tACS) conditions. We found that the cf-tACS condition promoting feedforward visual inputs to the medio-temporal area significantly enhanced motion discrimination performance and shifted visual field borders (i.e. through localized enlargement of isopters). Behavioural improvements associated with a change in oscillatory activity within motion processing pathways were proportional to the amount of residual structural fibres along these pathways and perilesional primary visual cortex activity. 
In sum, we report, for the first time, that cf-tACS, a novel, pathway-specific, physiology-inspired brain stimulation approach, is able to boost the efficacy of perceptual training, restoring visual motion processing and reducing the severity of visual impairments in adult stroke patients.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9801','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9801\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visual field loss is a common consequence of stroke and manifests in approximately one-third of patients in the chronic stage. Such loss can significantly impact daily life activities, compromising tasks such as reading, navigating or driving. Although slow and labour intensive, evidence suggests that early interventions with tailored rehabilitation programmes might stimulate visual recovery and improve quality of life in stroke survivors. To enhance the effects of such rehabilitation programmes, we designed a novel, non-invasive, pathway-specific, physiology-inspired cross-frequency brain stimulation protocol, where complex oscillatory signal integration was inferred from phase\u2013amplitude coupling of oscillatory signals between the primary visual cortex and the motion-sensitive medio-temporal area. Sixteen stroke patients were enrolled in a double-blind, randomized, cross-over trial, during which they performed two blocks of 10 daily training sessions of a direction discrimination task, combined with one of the two cross-frequency transcranial alternating current stimulation (cf-tACS versus control cf-tACS) conditions. We found that the cf-tACS condition promoting feedforward visual inputs to the medio-temporal area significantly enhanced motion discrimination performance and shifted visual field borders (i.e. 
through localized enlargement of isopters). Behavioural improvements associated with a change in oscillatory activity within motion processing pathways were proportional to the amount of residual structural fibres along these pathways and perilesional primary visual cortex activity. In sum, we report, for the first time, that cf-tACS, a novel, pathway-specific, physiology-inspired brain stimulation approach, is able to boost the efficacy of perceptual training, restoring visual motion processing and reducing the severity of visual impairments in adult stroke patients.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9801','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9801\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1093\/brain\/awaf252\" title=\"Follow DOI:10.1093\/brain\/awaf252\" target=\"_blank\">doi:10.1093\/brain\/awaf252<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9801','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Vanessa C. Radtke; Wanja Wolff; Corinna S. Martarelli<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9791','tp_abstract')\" style=\"cursor:pointer;\">How effortful is boredom? Studying self-control demands through pupillometry<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Collabra: Psychology, <\/span><span class=\"tp_pub_additional_volume\">vol. 11, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201324, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9791\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9791','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9791\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9791','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9791\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9791','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9791\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Radtke2025,<br \/>\r\ntitle = {How effortful is boredom? Studying self-control demands through pupillometry},<br \/>\r\nauthor = {Vanessa C. Radtke and Wanja Wolff and Corinna S. Martarelli},<br \/>\r\neditor = {Don Ravenzwaaij},<br \/>\r\ndoi = {10.1525\/collabra.151266},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Collabra: Psychology},<br \/>\r\nvolume = {11},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201324},<br \/>\r\nabstract = {Self-control is essential for managing actions, yet its exertion is perceived as effortful. Performing a task may require effort not only because of its inherent difficulty but also due to its potential for inducing boredom, as boredom has been shown to be self-control demanding itself. So far, the extent of self-control demands during boredom and its temporal dynamics remain elusive. We employed a multimethod approach to address this knowledge gap. Ninety-five participants took part in an easy and hard version of the Stroop task. 
During both tasks, they indicated several times their perceived task difficulty, boredom, boredom-related effort, difficulty-related effort, overall effort, and fatigue. We tested whether pupil size, as a physiological indicator of cognitive effort, was predicted more accurately by difficulty- and boredom-related effort together than by task-difficulty-related effort alone. The best model fit included boredom-, difficulty-related effort, and their interactions with task type (easy, hard Stroop). Tonic pupil size increased during the easy Stroop, while phasic pupil size decreased with greater boredom-related effort in both tasks. Greater difficulty-related effort was linked to increases in tonic and phasic pupil size in the easy, but not in the hard Stroop. Finally, boredom-related effort in the Stroop predicted performance in a subsequent flanker task. Our results provide preliminary support that enduring boredom may not only be perceived as effortful but also be reflected in psychophysiological changes. Moreover, it may influence subsequent behavior. This underscores the importance of considering boredom as a potential confound in self-control research and broader study designs.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9791','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9791\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Self-control is essential for managing actions, yet its exertion is perceived as effortful. Performing a task may require effort not only because of its inherent difficulty but also due to its potential for inducing boredom, as boredom has been shown to be self-control demanding itself. So far, the extent of self-control demands during boredom and its temporal dynamics remain elusive. 
We employed a multimethod approach to address this knowledge gap. Ninety-five participants took part in an easy and hard version of the Stroop task. During both tasks, they indicated several times their perceived task difficulty, boredom, boredom-related effort, difficulty-related effort, overall effort, and fatigue. We tested whether pupil size, as a physiological indicator of cognitive effort, was predicted more accurately by difficulty- and boredom-related effort together than by task-difficulty-related effort alone. The best model fit included boredom-, difficulty-related effort, and their interactions with task type (easy, hard Stroop). Tonic pupil size increased during the easy Stroop, while phasic pupil size decreased with greater boredom-related effort in both tasks. Greater difficulty-related effort was linked to increases in tonic and phasic pupil size in the easy, but not in the hard Stroop. Finally, boredom-related effort in the Stroop predicted performance in a subsequent flanker task. Our results provide preliminary support that enduring boredom may not only be perceived as effortful but also be reflected in psychophysiological changes. Moreover, it may influence subsequent behavior. 
This underscores the importance of considering boredom as a potential confound in self-control research and broader study designs.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9791','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9791\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1525\/collabra.151266\" title=\"Follow DOI:10.1525\/collabra.151266\" target=\"_blank\">doi:10.1525\/collabra.151266<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9791','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Katrina R. Quinn; Florian Sandhaeger; Nima Noury; Ema Zezelic; Markus Siegel<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9770','tp_abstract')\" style=\"cursor:pointer;\">Abstract choice representations during stable choice-response associations<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Communications Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u20138, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9770\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9770','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9770\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9770','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9770\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9770','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9770\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Quinn2025,<br \/>\r\ntitle = {Abstract choice representations during stable choice-response associations},<br \/>\r\nauthor = {Katrina R. Quinn and Florian Sandhaeger and Nima Noury and Ema Zezelic and Markus Siegel},<br \/>\r\ndoi = {10.1038\/s42003-025-08129-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Communications Biology},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u20138},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {An increasing body of evidence has demonstrated neural representations of choices independent of the motor actions used to report them \u2013 so-called abstract choices. However, it remains unclear whether such representations arise due to dynamic changes in choice-response associations or reflect a general property of decision-making. Here, we show that in the human brain, choices are represented abstractly even when choice-response associations remain stable over time. 
We recorded neural activity using magnetoencephalography while participants performed a motion discrimination task, with choice-response mappings held constant within blocks. We found neural information about participants' perceptual choices independent of both motor response and visual stimulus. Choice information increased during the stimulus and peaked after the response. Moreover, choice and response information showed distinct cortical distributions, with choice-related signals strongest in frontoparietal regions. Thus, abstract choice representations are not limited to dynamic or action-independent contexts and may be a general feature of decision-making.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9770','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9770\" style=\"display:none;\"><div class=\"tp_abstract_entry\">An increasing body of evidence has demonstrated neural representations of choices independent of the motor actions used to report them \u2013 so-called abstract choices. However, it remains unclear whether such representations arise due to dynamic changes in choice-response associations or reflect a general property of decision-making. Here, we show that in the human brain, choices are represented abstractly even when choice-response associations remain stable over time. We recorded neural activity using magnetoencephalography while participants performed a motion discrimination task, with choice-response mappings held constant within blocks. We found neural information about participants' perceptual choices independent of both motor response and visual stimulus. Choice information increased during the stimulus and peaked after the response. 
Moreover, choice and response information showed distinct cortical distributions, with choice-related signals strongest in frontoparietal regions. Thus, abstract choice representations are not limited to dynamic or action-independent contexts and may be a general feature of decision-making.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9770','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9770\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s42003-025-08129-1\" title=\"Follow DOI:10.1038\/s42003-025-08129-1\" target=\"_blank\">doi:10.1038\/s42003-025-08129-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9770','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Ying Que; Yueyuan Zheng; Janet H. Hsiao; Xiao Hu<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9762','tp_abstract')\" style=\"cursor:pointer;\">Using eye movements, electrodermal activities, and heart rates to predict different types of cognitive load during reading with background music<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9762','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9762','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9762\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9762','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9762\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Que2025a,<br \/>\r\ntitle = {Using eye movements, electrodermal activities, and heart rates to predict different types of cognitive load during reading with background music},<br \/>\r\nauthor = {Ying Que and Yueyuan Zheng and Janet H. Hsiao and Xiao Hu},<br \/>\r\ndoi = {10.1038\/s41598-025-03052-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {The triarchic model of cognitive load postulates three types of cognitive load\u2014extraneous, intrinsic, and germane load. While various approaches have been proposed to measure the three types of cognitive load, most measurements are intrusive. To address this issue, we leveraged multimodal learning analytics to collect eye movement (EM), electrodermal activity (EDA), heart rate (HR), and heart rate variability (HRV) from non-intrusive sensors and investigate whether they could predict the three types of cognitive load. 
We examined extraneous load (created by adding background music (BGM)), intrinsic load (created by text complexity), and germane load (reflected by comprehension accuracy) in a novel reading context with self-selected preferred BGM. One hundred and two (102) non-native English speakers were recruited. Half of them read English passages with BGM, while the other half read in silence. Results of logistic regression indicated that EM measures were predictive of the three load types, while HR\/HRV measures predicted extraneous and germane load. Our findings provide evidence supporting the triarchic structure of cognitive load theory and implications for the design of non-intrusive measurement of cognitive load.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9762','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9762\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The triarchic model of cognitive load postulates three types of cognitive load\u2014extraneous, intrinsic, and germane load. While various approaches have been proposed to measure the three types of cognitive load, most measurements are intrusive. To address this issue, we leveraged multimodal learning analytics to collect eye movement (EM), electrodermal activity (EDA), heart rate (HR), and heart rate variability (HRV) from non-intrusive sensors and investigate whether they could predict the three types of cognitive load. We examined extraneous load (created by adding background music (BGM)), intrinsic load (created by text complexity), and germane load (reflected by comprehension accuracy) in a novel reading context with self-selected preferred BGM. One hundred and two (102) non-native English speakers were recruited. Half of them read English passages with BGM, while the other half read in silence. 
Results of logistic regression indicated that EM measures were predictive of the three load types, while HR\/HRV measures predicted extraneous and germane load. Our findings provide evidence supporting the triarchic structure of cognitive load theory and implications for the design of non-intrusive measurement of cognitive load.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9762','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9762\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-03052-1\" title=\"Follow DOI:10.1038\/s41598-025-03052-1\" target=\"_blank\">doi:10.1038\/s41598-025-03052-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9762','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Sorin Pojoga; Ariana Andrei; Valentin Dragoi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9526','tp_abstract')\" style=\"cursor:pointer;\">Unsupervised learning of temporal regularities in visual cortical populations<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201312, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9526\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9526','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9526\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9526','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9526\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9526','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9526\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Pojoga2025,<br \/>\r\ntitle = {Unsupervised learning of temporal regularities in visual cortical populations},<br \/>\r\nauthor = {Sorin Pojoga and Ariana Andrei and Valentin Dragoi},<br \/>\r\ndoi = {10.1038\/s41467-025-60731-3},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201312},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {The brain's ability to extract temporal information from dynamic stimuli in the environment is essential for everyday behavior. To extract temporal statistical regularities, neural circuits must possess the ability to measure, produce, and anticipate sensory events. Here we report that when neural populations in macaque primary visual cortex are triggered to exhibit a periodic response to a repetitive sequence of optogenetic laser flashes, they learn to accurately reproduce the temporal sequence even when light stimulation is turned off. 
Despite the fact that individual cells had a poor capacity to extract temporal information, the population of neurons reproduced the periodic sequence in a temporally precise manner. The same neural population could learn different frequencies of external stimulation, and the ability to extract temporal information was found in all cortical layers. These results demonstrate a remarkable ability of sensory cortical populations to extract and reproduce complex temporal structure from unsupervised external stimulation even when stimuli are perceptually irrelevant.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9526','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9526\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The brain's ability to extract temporal information from dynamic stimuli in the environment is essential for everyday behavior. To extract temporal statistical regularities, neural circuits must possess the ability to measure, produce, and anticipate sensory events. Here we report that when neural populations in macaque primary visual cortex are triggered to exhibit a periodic response to a repetitive sequence of optogenetic laser flashes, they learn to accurately reproduce the temporal sequence even when light stimulation is turned off. Despite the fact that individual cells had a poor capacity to extract temporal information, the population of neurons reproduced the periodic sequence in a temporally precise manner. The same neural population could learn different frequencies of external stimulation, and the ability to extract temporal information was found in all cortical layers. 
These results demonstrate a remarkable ability of sensory cortical populations to extract and reproduce complex temporal structure from unsupervised external stimulation even when stimuli are perceptually irrelevant.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9526','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9526\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-60731-3\" title=\"Follow DOI:10.1038\/s41467-025-60731-3\" target=\"_blank\">doi:10.1038\/s41467-025-60731-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9526','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Marek Placi\u0144ski; Theresa Matzinger<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9494','tp_abstract')\" style=\"cursor:pointer;\">Structural alignment leads to lower cognitive load in a collaborative task<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, <\/span><span class=\"tp_pub_additional_volume\">vol. 26, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
102\u2013129, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9494\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9494','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9494\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9494','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9494\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9494','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9494\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Placinski2025,<br \/>\r\ntitle = {Structural alignment leads to lower cognitive load in a collaborative task},<br \/>\r\nauthor = {Marek Placi\u0144ski and Theresa Matzinger},<br \/>\r\ndoi = {10.1075\/is.24029.pla},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems},<br \/>\r\nvolume = {26},<br \/>\r\nnumber = {1},<br \/>\r\npages = {102\u2013129},<br \/>\r\nabstract = {One of the characteristics of dialogue is that interlocutors tend to converge on the same linguistic choices, called alignment. In this paper, we aim to investigate whether structural alignment \u2014 the tendency to use the same syntactic structures \u2014 has a positive effect on cognitive load and task completion in a task-based conversation. To do so, we engage participants in a collaborative task where they have to interact with another interlocutor (actually a bot) and inform each other about the location of landmarks on a map. In one condition the bot aligns with the participant and in the other it does not. 
Participants are recorded with an eye tracker during the experiment so that we can evaluate cognitive load and performance in the task. We found that when participants interact with an aligning bot, their cognitive load decreases and task completion is facilitated, but only to a certain degree. The results of the study suggest that alignment is a strategy that can be used in order to facilitate task performance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9494','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9494\" style=\"display:none;\"><div class=\"tp_abstract_entry\">One of the characteristics of dialogue is that interlocutors tend to converge on the same linguistic choices, called alignment. In this paper, we aim to investigate whether structural alignment \u2014 the tendency to use the same syntactic structures \u2014 has a positive effect on cognitive load and task completion in a task-based conversation. To do so, we engage participants in a collaborative task where they have to interact with another interlocutor (actually a bot) and inform each other about the location of landmarks on a map. In one condition the bot aligns with the participant and in the other it does not. Participants are recorded with an eye tracker during the experiment so that we can evaluate cognitive load and performance in the task. We found that when participants interact with an aligning bot, their cognitive load decreases and task completion is facilitated, but only to a certain degree. 
The results of the study suggest that alignment is a strategy that can be used in order to facilitate task performance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9494','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9494\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1075\/is.24029.pla\" title=\"Follow DOI:10.1075\/is.24029.pla\" target=\"_blank\">doi:10.1075\/is.24029.pla<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9494','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Oria Pitem; Yaniv Mama<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9485','tp_abstract')\" style=\"cursor:pointer;\">Predicting long-term memory via pupillometry<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 15, <\/span><span class=\"tp_pub_additional_number\">no. 
1, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9485\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9485','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9485\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9485','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9485\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9485','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9485\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Pitem2025,<br \/>\r\ntitle = {Predicting long-term memory via pupillometry},<br \/>\r\nauthor = {Oria Pitem and Yaniv Mama},<br \/>\r\ndoi = {10.1038\/s41598-025-09703-7},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Scientific Reports},<br \/>\r\nvolume = {15},<br \/>\r\nnumber = {1},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Pupillometry research has established that pupil size reflects cognitive processes through autonomic nervous system activity, with high arousal triggering pupil dilation. Studies examining pupil size during encoding have yielded conflicting results regarding its relationship with subsequent memory performance, and few have investigated baseline pupil size. This study examined whether pupil diameter before and during stimulus presentation predicts memory performance. We hypothesized that successfully recalled words would be associated with larger pupils than forgotten words, based on the role of arousal and attention in memory formation. 
To test these hypotheses, we conducted two experiments in which we tracked ninety-five psychology students' eyes while they performed a long-term memory test. The results depict larger pupil size while studying later successfully retrieved words. Interestingly, this phenomenon also occurs before word presentation (during baseline), which supports the \u201creadiness to remember\u201d (R2R) framework. This implies that pupillary changes while preparing to encode information can indicate later memory performance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9485','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9485\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Pupillometry research has established that pupil size reflects cognitive processes through autonomic nervous system activity, with high arousal triggering pupil dilation. Studies examining pupil size during encoding have yielded conflicting results regarding its relationship with subsequent memory performance, and few have investigated baseline pupil size. This study examined whether pupil diameter before and during stimulus presentation predicts memory performance. We hypothesized that successfully recalled words would be associated with larger pupils than forgotten words, based on the role of arousal and attention in memory formation. To test these hypotheses, we conducted two experiments in which we tracked ninety-five psychology students' eyes while they performed a long-term memory test. The results depict larger pupil size while studying later successfully retrieved words. Interestingly, this phenomenon also occurs before word presentation (during baseline), which supports the \u201creadiness to remember\u201d (R2R) framework. 
This implies that pupillary changes while preparing to encode information can indicate later memory performance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9485','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9485\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-025-09703-7\" title=\"Follow DOI:10.1038\/s41598-025-09703-7\" target=\"_blank\">doi:10.1038\/s41598-025-09703-7<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9485','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Zhongling Pi; Jingjing Dong; Jiayu Wang; Xiying Li; Xin Zhao<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9410','tp_abstract')\" style=\"cursor:pointer;\">Modality matters: How combining oral and written instructional explanations improves STEM learning from video lectures<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">International Journal of STEM Education, <\/span><span class=\"tp_pub_additional_volume\">vol. 12, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1\u201319, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9410\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9410','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9410\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9410','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9410\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9410','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9410\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Pi2025a,<br \/>\r\ntitle = {Modality matters: How combining oral and written instructional explanations improves STEM learning from video lectures},<br \/>\r\nauthor = {Zhongling Pi and Jingjing Dong and Jiayu Wang and Xiying Li and Xin Zhao},<br \/>\r\ndoi = {10.1186\/s40594-025-00539-1},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {International Journal of STEM Education},<br \/>\r\nvolume = {12},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201319},<br \/>\r\npublisher = {Springer Science and Business Media Deutschland GmbH},<br \/>\r\nabstract = {Background and purpose of the study: STEM learning often involves a multitude of complex and abstract concepts and ideas that can be challenging for students to comprehend. Research suggests that the oral and visual representations in video lectures can maximize students' cognitive infrastructure, helping them to organize knowledge more effectively. 
However, compared to traditional learning methods, video lectures may lack interaction and feedback, which can lead to ineffective learning strategies (e.g., passive viewing) and reduced learning engagement. Instructional explanations serve as a generative strategy, enabling students to create oral and written pieces based on the knowledge gained from video lectures and their prior knowledge. This study recruited a total of 87 undergraduate students and explored how the modality of instructional explanations generated by these students for a fictitious student influenced their learning. Specifically, the study explored the effects on students' learning performance, attention, behavioral patterns of preparing-to-explain, the quality of notes, and the quality of instructional explanations in video lectures on a STEM subject. Results: The results revealed that students who adopted a combination of oral and written instructional explanations showed better immediate retention and transfer than those who adopted just one type of explanation. In addition, both oral-only and combined oral-and-written explanations promoted more self-regulated learning behaviors during the phase of preparing-to-explain. The study also found that the quality of instructional explanations played a mediating role in the effects of modality. Conclusions and potential implications: Our findings suggest that combining oral and written instructional explanations is more effective in supporting students' STEM learning from video lectures compared to using a single form of explanation. These findings have significant implications for teaching and learning STEM subjects through video lectures. 
Students and educators should recognize the complementary roles of oral and written instructional explanations and opt for a combined oral-and-written approach during STEM learning activities.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9410','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9410\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Background and purpose of the study: STEM learning often involves a multitude of complex and abstract concepts and ideas that can be challenging for students to comprehend. Research suggests that the oral and visual representations in video lectures can maximize students' cognitive infrastructure, helping them to organize knowledge more effectively. However, compared to traditional learning methods, video lectures may lack interaction and feedback, which can lead to ineffective learning strategies (e.g., passive viewing) and reduced learning engagement. Instructional explanations serve as a generative strategy, enabling students to create oral and written pieces based on the knowledge gained from video lectures and their prior knowledge. This study recruited a total of 87 undergraduate students and explored how the modality of instructional explanations generated by these students for a fictitious student influenced their learning. Specifically, the study explored the effects on students' learning performance, attention, behavioral patterns of preparing-to-explain, the quality of notes, and the quality of instructional explanations in video lectures on a STEM subject. Results: The results revealed that students who adopted a combination of oral and written instructional explanations showed better immediate retention and transfer than those who adopted just one type of explanation. 
In addition, both oral-only and combined oral-and-written explanations promoted more self-regulated learning behaviors during the phase of preparing-to-explain. The study also found that the quality of instructional explanations played a mediating role in the effects of modality. Conclusions and potential implications: Our findings suggest that combining oral and written instructional explanations is more effective in supporting students' STEM learning from video lectures compared to using a single form of explanation. These findings have significant implications for teaching and learning STEM subjects through video lectures. Students and educators should recognize the complementary roles of oral and written instructional explanations and opt for a combined oral-and-written approach during STEM learning activities.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9410','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9410\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1186\/s40594-025-00539-1\" title=\"Follow DOI:10.1186\/s40594-025-00539-1\" target=\"_blank\">doi:10.1186\/s40594-025-00539-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9410','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Joris Perra; B\u00e9n\u00e9dicte Poulin-Charronnat; Thierry Baccino; Patrick Bard; Philippe Pfister; Philippe Lalitte; Melissa Zerbib; V\u00e9ronique Drai-Zerbib<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9316','tp_abstract')\" style=\"cursor:pointer;\">Saccadic and visuo-motor flexibility towards local parafoveal complexity as a hallmark of expert knowledge-driven processing 
during sight-reading of music<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Quarterly Journal of Experimental Psychology, <\/span><span class=\"tp_pub_additional_volume\">vol. 78, <\/span><span class=\"tp_pub_additional_number\">no. 12, <\/span><span class=\"tp_pub_additional_pages\">pp. 2660\u20132680, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9316','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9316','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9316\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9316','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9316\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Perra2025,<br \/>\r\ntitle = {Saccadic and visuo-motor flexibility towards local parafoveal complexity as a hallmark of expert knowledge-driven processing during sight-reading of music},<br \/>\r\nauthor = {Joris Perra and B\u00e9n\u00e9dicte Poulin-Charronnat and Thierry Baccino and Patrick Bard and Philippe Pfister and Philippe Lalitte and Melissa Zerbib and V\u00e9ronique Drai-Zerbib},<br \/>\r\ndoi = {10.1177\/17470218251325245},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Quarterly Journal of Experimental Psychology},<br \/>\r\nvolume = {78},<br \/>\r\nnumber = {12},<br \/>\r\npages = {2660\u20132680},<br \/>\r\npublisher = {SAGE Publications Ltd},<br 
\/>\r\nabstract = {Expertise is associated with a knowledge-driven information-processing approach. Experts benefit from long-term knowledge structures\u2014chunks and retrieval structures\/templates\u2014leading them to formulate expectations about local stimulus characteristics and to extract information projected onto distant areas from the fixation location. In an attempt to shed light on the way knowledge-driven processing impacts eye movements during music reading, this study aimed to determine how expert musicians deal with local complexity in a sight-reading task. Thirty musicians from two expertise levels had to sight read 4\u2009bar score excerpts. Local analyses were conducted to investigate how the gaze behaves prior to and during the sight reading of different score characteristics, such as alteration, location of the notes on the staff, note count, and heterogeneity of notes. The more experts (1) were less affected by the foveal load induced by local complexity, showing a lower increase in fixation durations between noncomplex features and local complexity compared to the less experts; (2) presented a saccadic flexibility towards the local complexity projected onto the parafoveal area, being the only group to exhibit shorter progressive incoming saccade sizes on accidentals and larger progressive incoming saccade sizes on new notes compared to noncomplex features; and (3) presented a visuo-motor flexibility depending on the played complexity, being the only group to exhibit a shorter eye-hand span when playing accidentals or distant notes compared to noncomplex features. 
Overall, this study highlights the usefulness of local analyses as a relevant tool to investigate foveal and parafoveal processing skills during music reading.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9316','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9316\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Expertise is associated with a knowledge-driven information-processing approach. Experts benefit from long-term knowledge structures\u2014chunks and retrieval structures\/templates\u2014leading them to formulate expectations about local stimulus characteristics and to extract information projected onto distant areas from the fixation location. In an attempt to shed light on the way knowledge-driven processing impacts eye movements during music reading, this study aimed to determine how expert musicians deal with local complexity in a sight-reading task. Thirty musicians from two expertise levels had to sight read 4\u2009bar score excerpts. Local analyses were conducted to investigate how the gaze behaves prior to and during the sight reading of different score characteristics, such as alteration, location of the notes on the staff, note count, and heterogeneity of notes. 
The more expert musicians (1) were less affected by the foveal load induced by local complexity, showing a lower increase in fixation durations between noncomplex features and local complexity compared to the less expert musicians; (2) presented a saccadic flexibility towards the local complexity projected onto the parafoveal area, being the only group to exhibit shorter progressive incoming saccade sizes on accidentals and larger progressive incoming saccade sizes on new notes compared to noncomplex features; and (3) presented a visuo-motor flexibility depending on the played complexity, being the only group to exhibit a shorter eye-hand span when playing accidentals or distant notes compared to noncomplex features. Overall, this study highlights the usefulness of local analyses as a relevant tool to investigate foveal and parafoveal processing skills during music reading.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9316','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9316\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1177\/17470218251325245\" title=\"Follow DOI:10.1177\/17470218251325245\" target=\"_blank\">doi:10.1177\/17470218251325245<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9316','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Hame Park; Ayelet Arazi; Bharath Chandra Talluri; Marco Celotto; Stefano Panzeri; Alan A. Stocker; Tobias H. 
Donner<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9122','tp_abstract')\" style=\"cursor:pointer;\">Confirmation bias through selective readout of information encoded in human parietal cortex<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201315, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9122\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9122','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9122\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9122','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9122\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9122','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9122\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Park2025,<br \/>\r\ntitle = {Confirmation bias through selective readout of information encoded in human parietal cortex},<br \/>\r\nauthor = {Hame Park and Ayelet Arazi and Bharath Chandra Talluri and Marco Celotto and Stefano Panzeri and Alan A. Stocker and Tobias H. 
Donner},<br \/>\r\ndoi = {10.1038\/s41467-025-61010-x},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201315},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Decision-makers often process new evidence selectively, depending on their current beliefs about the world. We asked whether such confirmation biases result from biases in the encoding of sensory evidence in the brain, or alternatively in the utilization of encoded evidence for behavior. Human participants estimated the source of a sequence of visual-spatial evidence samples while we measured cortical population activity with magnetoencephalography. Halfway through the sequence, participants were prompted to judge the more likely source category. We find that processing of subsequent evidence depends on its consistency with the previously chosen category. Evidence encoded in parietal cortex contributes more to the estimation report when that evidence is consistent with the previous choice compared to when it contradicts that choice. Our results indicate that information contradicting pre-existing beliefs has little impact on subsequent behavior, despite being precisely encoded in the brain. This provides room for deliberative control to counteract confirmation biases.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9122','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9122\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Decision-makers often process new evidence selectively, depending on their current beliefs about the world. 
We asked whether such confirmation biases result from biases in the encoding of sensory evidence in the brain, or alternatively in the utilization of encoded evidence for behavior. Human participants estimated the source of a sequence of visual-spatial evidence samples while we measured cortical population activity with magnetoencephalography. Halfway through the sequence, participants were prompted to judge the more likely source category. We find that processing of subsequent evidence depends on its consistency with the previously chosen category. Evidence encoded in parietal cortex contributes more to the estimation report when that evidence is consistent with the previous choice compared to when it contradicts that choice. Our results indicate that information contradicting pre-existing beliefs has little impact on subsequent behavior, despite being precisely encoded in the brain. This provides room for deliberative control to counteract confirmation biases.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9122','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9122\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-61010-x\" title=\"Follow DOI:10.1038\/s41467-025-61010-x\" target=\"_blank\">doi:10.1038\/s41467-025-61010-x<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9122','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\">Elisabet Par\u00e9s-Pujolr\u00e0s; Simon P. Kelly; Peter R. 
Murphy<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('9116','tp_abstract')\" style=\"cursor:pointer;\">Dissociable encoding of evolving beliefs and momentary belief updates in distinct neural decision signals<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Communications, <\/span><span class=\"tp_pub_additional_volume\">vol. 16, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u201314, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_9116\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9116','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_9116\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9116','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9116\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9116','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9116\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{ParesPujolras2025,<br \/>\r\ntitle = {Dissociable encoding of evolving beliefs and momentary belief updates in distinct neural decision signals},<br \/>\r\nauthor = {Elisabet Par\u00e9s-Pujolr\u00e0s and Simon P. Kelly and Peter R. 
Murphy},<br \/>\r\ndoi = {10.1038\/s41467-025-58861-9},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-12-01},<br \/>\r\njournal = {Nature Communications},<br \/>\r\nvolume = {16},<br \/>\r\nnumber = {1},<br \/>\r\npages = {1\u201314},<br \/>\r\npublisher = {Nature Research},<br \/>\r\nabstract = {Making accurate decisions in noisy environments requires integrating evidence over time. Studies of simple perceptual decisions in static environments have identified two human neurophysiological signals that evolve with similar integration dynamics, with one - the centroparietal positivity - appearing to compute the running integral and continuously feed it to the other - motor beta lateralisation. However, it remains unknown whether and how these signals serve more distinct functional roles in more complex scenarios. Here, we use a volatile expanded judgement task that dissociates raw sensory information, belief updates, and the evolving belief itself. We find that motor beta lateralisation traces the evolving belief across stimuli, while the centroparietal positivity locally encodes the belief updates associated with each individual stimulus. These results suggest a flexible computational hierarchy where context-dependent belief updates can be computed sample-by-sample at an intermediate processing level to modify downstream belief representations for protracted decisions about discrete stimuli.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9116','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_9116\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Making accurate decisions in noisy environments requires integrating evidence over time. 
Studies of simple perceptual decisions in static environments have identified two human neurophysiological signals that evolve with similar integration dynamics, with one - the centroparietal positivity - appearing to compute the running integral and continuously feed it to the other - motor beta lateralisation. However, it remains unknown whether and how these signals serve more distinct functional roles in more complex scenarios. Here, we use a volatile expanded judgement task that dissociates raw sensory information, belief updates, and the evolving belief itself. We find that motor beta lateralisation traces the evolving belief across stimuli, while the centroparietal positivity locally encodes the belief updates associated with each individual stimulus. These results suggest a flexible computational hierarchy where context-dependent belief updates can be computed sample-by-sample at an intermediate processing level to modify downstream belief representations for protracted decisions about discrete stimuli.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9116','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9116\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41467-025-58861-9\" title=\"Follow DOI:10.1038\/s41467-025-58861-9\" target=\"_blank\">doi:10.1038\/s41467-025-58861-9<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9116','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><\/table><div class=\"tablenav\"><div class=\"tablenav-pages\"><span class=\"displaying-num\">8709 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 88 <a 
href=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/?limit=88&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> <\/div><\/div><\/div>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Cognitive Eye-Tracking Publications&nbsp; All EyeLink eye tracker cognitive and perception eye tracker research publications up until 2025 (with some early 2026s) are listed below by year. You can search the eye-tracking publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any &hellip;<\/p>","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"class_list":{"1":"page","2":"type-page","5":"entry"},"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.5 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Cognitive Eye-Tracking Publications - SR Research<\/title>\n<meta name=\"description\" content=\"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. 
These publications are solely peer-reviewed journal articles.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EyeLink Eye Trackers in Cognitive Publications\" \/>\n<meta property=\"og:description\" content=\"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. These publications are solely peer-reviewed journal articles.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/\" \/>\n<meta property=\"og:site_name\" content=\"Fast, Accurate, Reliable Eye Tracking\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/SR-Research-Ltd-640093842854433\/\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-23T18:47:07+00:00\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@SRResearchLtd\" \/>\n<meta name=\"twitter:label1\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data1\" content=\"1 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/cognitive-publications\\\/\",\"url\":\"https:\\\/\\\/www.sr-research.com\\\/cognitive-publications\\\/\",\"name\":\"Cognitive Eye-Tracking Publications - SR Research\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#website\"},\"datePublished\":\"2017-07-17T03:54:21+00:00\",\"dateModified\":\"2026-02-23T18:47:07+00:00\",\"description\":\"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. 
These publications are solely peer-reviewed journal articles.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/cognitive-publications\\\/#breadcrumb\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.sr-research.com\\\/cognitive-publications\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/cognitive-publications\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.sr-research.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EyeLink Eye Trackers in Cognitive Publications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#website\",\"url\":\"https:\\\/\\\/www.sr-research.com\\\/\",\"name\":\"Fast, Accurate, Reliable Eye Tracking\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.sr-research.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#organization\",\"name\":\"SR Research Ltd.\",\"url\":\"https:\\\/\\\/www.sr-research.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.sr-research.com\\\/wp-content\\\/uploads\\\/2017\\\/12\\\/sr-research-logo-square.jpg\",\"contentUrl\":\"https:\\\/\\\/www.sr-research.com\\\/wp-content\\\/uploads\\\/2017\\\/12\\\/sr-research-logo-square.jpg\",\"width\":512,\"height\":512,\"caption\":\"SR Research 
Ltd.\"},\"image\":{\"@id\":\"https:\\\/\\\/www.sr-research.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/SR-Research-Ltd-640093842854433\\\/\",\"https:\\\/\\\/x.com\\\/SRResearchLtd\",\"https:\\\/\\\/www.instagram.com\\\/srresearchltd\\\/\",\"https:\\\/\\\/ca.linkedin.com\\\/company\\\/sr-research-ltd\",\"https:\\\/\\\/www.youtube.com\\\/channel\\\/UCCfE1oJHk4WLe9h30AcNOJg\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Cognitive Eye-Tracking Publications - SR Research","description":"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. These publications are solely peer-reviewed journal articles.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/","og_locale":"zh_CN","og_type":"article","og_title":"EyeLink Eye Trackers in Cognitive Publications","og_description":"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. 
These publications are solely peer-reviewed journal articles.","og_url":"https:\/\/www.sr-research.com\/zh\/cognitive-publications\/","og_site_name":"Fast, Accurate, Reliable Eye Tracking","article_publisher":"https:\/\/www.facebook.com\/SR-Research-Ltd-640093842854433\/","article_modified_time":"2026-02-23T18:47:07+00:00","twitter_card":"summary_large_image","twitter_site":"@SRResearchLtd","twitter_misc":{"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"1 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.sr-research.com\/cognitive-publications\/","url":"https:\/\/www.sr-research.com\/cognitive-publications\/","name":"Cognitive Eye-Tracking Publications - SR Research","isPartOf":{"@id":"https:\/\/www.sr-research.com\/#website"},"datePublished":"2017-07-17T03:54:21+00:00","dateModified":"2026-02-23T18:47:07+00:00","description":"This is a list of eye-tracking cognitive publications using EyeLink eye trackers. These publications are solely peer-reviewed journal articles.","breadcrumb":{"@id":"https:\/\/www.sr-research.com\/cognitive-publications\/#breadcrumb"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.sr-research.com\/cognitive-publications\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.sr-research.com\/cognitive-publications\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.sr-research.com\/"},{"@type":"ListItem","position":2,"name":"EyeLink Eye Trackers in Cognitive 
Publications"}]},{"@type":"WebSite","@id":"https:\/\/www.sr-research.com\/#website","url":"https:\/\/www.sr-research.com\/","name":"\u9ad8\u901f\u3001\u7cbe\u51c6\u548c\u53ef\u9760\u7684\u773c\u52a8\u8ffd\u8e2a\u89e3\u51b3\u65b9\u6848","description":"","publisher":{"@id":"https:\/\/www.sr-research.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.sr-research.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"zh-Hans"},{"@type":"Organization","@id":"https:\/\/www.sr-research.com\/#organization","name":"SR Research Ltd.","url":"https:\/\/www.sr-research.com\/","logo":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.sr-research.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.sr-research.com\/wp-content\/uploads\/2017\/12\/sr-research-logo-square.jpg","contentUrl":"https:\/\/www.sr-research.com\/wp-content\/uploads\/2017\/12\/sr-research-logo-square.jpg","width":512,"height":512,"caption":"SR Research 
Ltd."},"image":{"@id":"https:\/\/www.sr-research.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/SR-Research-Ltd-640093842854433\/","https:\/\/x.com\/SRResearchLtd","https:\/\/www.instagram.com\/srresearchltd\/","https:\/\/ca.linkedin.com\/company\/sr-research-ltd","https:\/\/www.youtube.com\/channel\/UCCfE1oJHk4WLe9h30AcNOJg"]}]}},"acf":[],"_links":{"self":[{"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/pages\/167","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/comments?post=167"}],"version-history":[{"count":41,"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/pages\/167\/revisions"}],"predecessor-version":[{"id":34422,"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/pages\/167\/revisions\/34422"}],"wp:attachment":[{"href":"https:\/\/www.sr-research.com\/zh\/wp-json\/wp\/v2\/media?parent=167"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}