{"id":15792,"date":"2020-02-24T11:43:52","date_gmt":"2020-02-24T16:43:52","guid":{"rendered":"https:\/\/www.sr-research.com\/?p=15792"},"modified":"2026-03-25T11:34:51","modified_gmt":"2026-03-25T16:34:51","slug":"highly-cited-eyelink-articles","status":"publish","type":"post","link":"https:\/\/www.sr-research.com\/zh\/eye-tracking-blog\/eyelink-research-articles\/highly-cited-eyelink-articles\/","title":{"rendered":"\u88ab\u9ad8\u5ea6\u5f15\u7528\u7684\u773c\u7ebf\u6587\u7ae0"},"content":{"rendered":"<div class=\"mai-columns has-xl-margin-bottom\"><div class=\"mai-columns-wrap has-columns\" style=\"--column-gap:var(--spacing-xl);--row-gap:var(--spacing-xl);--align-columns:start;--align-columns-vertical:initial;\">\n<div class=\"mai-column is-column\" style=\"--columns-xs:1\/1;--flex-xs:0 0 var(--flex-basis);--columns-sm:1\/1;--flex-sm:0 0 var(--flex-basis);--columns-md:1\/1;--flex-md:0 0 var(--flex-basis);--columns-lg:1\/1;--flex-lg:0 0 var(--flex-basis);--justify-content:start;\">\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1800\" height=\"500\" src=\"https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019.jpg\" alt=\"EyeLink Eye Tracker Publications\" class=\"wp-image-16083\" srcset=\"https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019-18x5.jpg 18w, https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019-300x83.jpg 300w, https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019-768x213.jpg 768w, https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019-1024x284.jpg 1024w, https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019-1536x427.jpg 1536w, https:\/\/www.sr-research.com\/wp-content\/uploads\/2020\/02\/eyelink-publications-2019.jpg 1800w\" sizes=\"auto, (max-width: 1800px) 100vw, 1800px\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n<p class=\"yoast-reading-time__wrapper\"><span class=\"yoast-reading-time__icon\"><svg aria-hidden=\"true\" focusable=\"false\" data-icon=\"clock\" width=\"20\" height=\"20\" fill=\"none\" stroke=\"currentColor\" style=\"display:inline-block;vertical-align:-0.1em\" role=\"img\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\"><path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M12 8v4l3 3m6-3a9 9 0 11-18 0 9 9 0 0118 0z\"><\/path><\/svg><\/span><span class=\"yoast-reading-time__spacer\" style=\"display:inline-block;width:1em\"><\/span><span class=\"yoast-reading-time__descriptive-text\">Estimated reading time: <\/span><span class=\"yoast-reading-time__reading-time\">3<\/span><span class=\"yoast-reading-time__time-unit\"> \u5206\u949f<\/span><\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>We have recently finished updating our <a href=\"https:\/\/www.sr-research.com\/eye-tracking-publications-list\/\"><strong>database of EyeLink publications<\/strong><\/a> &#8211; there were more than 900 papers published in 2019 alone, and the database now contains well over 8000 publications in total. 
Highly Cited EyeLink Publications

The earlier blog also listed the "top" journals for EyeLink publications – both with respect to the number of EyeLink articles and with respect to the journal's impact factor. This year I thought it might be interesting to list some of the most highly cited articles in our database. Determining citation counts is a somewhat inexact science. There are three main sources of information on article citation counts – Web of Science, Scopus, and Google Scholar. While the advantages and disadvantages of each of these sources are a topic of lively debate (Harzing has written extensively on this – see, e.g., this blog: https://harzing.com/publications/white-papers/google-scholar-a-new-data-source-for-citation-analysis/), Google Scholar has the twin advantages of very comprehensive coverage and free accessibility.
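As a rough illustration of how such a screen could be automated, the sketch below queries Google Scholar through the third-party scholarly package and keeps titles cited more than 500 times. This is an assumption-laden example rather than our actual workflow: the search_pubs interface and num_citations field belong to that library, the titles are just two from the list below, and Google Scholar rate-limits automated queries, so real use would need delays or proxies.

```python
# Illustrative sketch: look up citation counts with the third-party
# "scholarly" package and keep titles cited more than 500 times.
# Not the workflow used for this post; Scholar rate-limits scraping.
from scholarly import scholarly

titles = [
    "Changes in visibility as a function of spatial frequency and microsaccade occurrence",
    "Emphasizing the only character: EMPHASIS, attention and contrast",
]

highly_cited = []
for title in titles:
    result = next(scholarly.search_pubs(title), None)  # best match, if any
    if result and result.get("num_citations", 0) > 500:
        highly_cited.append((title, result["num_citations"]))

for title, n in sorted(highly_cited, key=lambda t: -t[1]):
    print(f"{n:>5}  {title}")
```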
The list below is a selection of 15 EyeLink articles, all of which have citation counts above 500 according to Google Scholar. The list was generated by searching the top 20 journals by volume of EyeLink articles and the top 10 journals by impact factor in our database. It is not intended to be exhaustive, and the articles are listed in no particular order. I think the list provides a fascinating illustration of the sheer breadth (and enormous impact) of the research that EyeLink eye trackers have been involved in.

D'souza, J. F., Rich, J. M., Cloherty, S. L., Price, N. S. C., & Hagan, M. A. (2025). Topographic organization of saccade-related response field properties in the marmoset posterior parietal cortex. eNeuro, 12(10), 1–12. doi:10.1523/ENEURO.0287-25.2025

Abstract: Despite various histological, electrophysiological, and imaging studies, the topographic organization of saccade-related activity in the posterior parietal cortex (PPC) has been notoriously difficult to characterize. In part, this is because areas of interest in PPC are often embedded deep in sulci in macaques and humans. Understanding the extent of topographic organization in PPC can provide insights into the computational contributions of PPC. The lissencephalic cortex of the common marmoset offers a unique opportunity to investigate fine-scale topographic organization in PPC. Recordings were obtained from the PPC of two male marmosets performing a visually guided center-out saccade task with 8 or 36 peripheral targets, using multichannel electrode arrays with 100 μm spacing. By plotting the pattern of saccade direction tuning preferences across all penetrations and cortical depths, we uncovered topographic organizational features within the PPC. Like other primates, multiunits in marmoset PPC tend to prefer saccade targets in the contralateral visual field. The results detail how preference for saccadic direction changes in a systematic manner across cortical distance, such that response units closer in proximity tend to show systematic changes in their tuning preferences. Across cortical distance, the visual field was also systematically encoded, but reversals in direction varied across penetrations. The analysis highlights the likelihood of multiple representations of the visual field for saccade direction preference across PPC. These novel findings suggest a possible functional organization of saccade-related activity in marmoset PPC, giving insights into the computational capacity of the PPC.
Farrell, J., Conte, S., Barry-Anwar, R., & Scott, L. S. (2023). Face race and sex impact visual fixation strategies for upright and inverted faces in 3- to 6-year-old children. Developmental Psychobiology, 65(2), 1–15. doi:10.1002/dev.22362

Abstract: Everyday face experience tends to be biased, such that infants and young children interact more often with own-race and female faces, leading to differential processing of faces within these groups relative to others. In the present study, visual fixation strategies were recorded using eye tracking to determine the extent to which face race and sex/gender impact a key index of face processing in 3- to 6-year-old children (n = 47). Children viewed male and female upright and inverted White and Asian faces while visual fixations were recorded. Face orientation was found to have robust effects on children's visual fixations, such that children exhibited shorter first fixation and average fixation durations and a greater number of fixations for inverted compared to upright face trials. First fixations to the eye region were also greater for upright compared to inverted faces. Fewer fixations and longer duration fixations were found for trials with male compared to female faces and for upright compared to inverted unfamiliar-race faces, but not familiar-race faces. These findings demonstrate evidence of differential fixation strategies toward different types of faces in 3- to 6-year-old children, illustrating the importance of experience in the development of visual attention to faces.

Maith, O., Baladron, J., Einhäuser, W., & Hamker, F. H. (2023). Exploration behavior after reversals is predicted by STN-GPe synaptic plasticity in a basal ganglia model. iScience, 26(5), 1–23. doi:10.1016/j.isci.2023.106599

Abstract: Humans can quickly adapt their behavior to changes in the environment. Classical reversal learning tasks mainly measure how well participants can disengage from a previously successful behavior, but not how alternative responses are explored. Here, we propose a novel 5-choice reversal learning task with alternating position-reward contingencies to study exploration behavior after a reversal. We compare human exploratory saccade behavior with a prediction obtained from a neuro-computational model of the basal ganglia. A new synaptic plasticity rule for learning the connectivity between the subthalamic nucleus (STN) and external globus pallidus (GPe) results in exploration biases to previously rewarded positions. The model simulations and human data both show that, with experimental experience, exploration becomes limited to only those positions that have been rewarded in the past. Our study demonstrates how quite complex behavior may result from a simple sub-circuit within the basal ganglia pathways.
Barretto-García, M., Hollander, G., Grueschow, M., Polanía, R., Woodford, M., & Ruff, C. C. (2023). Individual risk attitudes arise from noise in neurocognitive magnitude representations. Nature Human Behaviour, 7(9), 1551–1567. doi:10.1038/s41562-023-01643-4

Abstract: Humans are generally risk averse, preferring smaller certain over larger uncertain outcomes. Economic theories usually explain this by assuming concave utility functions. Here, we provide evidence that risk aversion can also arise from relative underestimation of larger monetary payoffs, a perceptual bias rooted in the noisy logarithmic coding of numerical magnitudes. We confirmed this with psychophysics and functional magnetic resonance imaging, by measuring behavioural and neural acuity of magnitude representations during a magnitude perception task and relating these measures to risk attitudes during separate risky financial decisions. Computational modelling indicated that participants use similar mental magnitude representations in both tasks, with correlated precision across perceptual and risky choices. Participants with more precise magnitude representations in parietal cortex showed less variable behaviour and less risk aversion. Our results highlight that at least some individual characteristics of economic behaviour can reflect capacity limitations in perceptual processing rather than processes that assign subjective values to monetary outcomes.

Meirhaeghe, N., Sohn, H., & Jazayeri, M. (2021). A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron, 109(18), 2995–3011.e5. doi:10.1016/j.neuron.2021.08.025

Abstract: The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Kozak, A., Wieteska, M., Ninghetto, M., Szulborski, K., Gałecki, T., Szaflik, J., & Burnat, K. (2021). Motion-based acuity task: Full visual field measurement of shape and motion perception. Translational Vision Science & Technology, 10(1), 9. doi:10.1167/tvst.10.1.9

Abstract: Purpose: Damage to the retinal representation of the visual field affects both its local features and the spared, unaffected parts. Measurements of visual deficiencies in ophthalmological patients are separated into central (shape) or peripheral (motion and space perception) properties, and acuity tasks rely on stationary stimuli. We explored the benefit of measuring shape and motion perception simultaneously using a new motion-based acuity task. Methods: Eight healthy control subjects, three patients with retinitis pigmentosa (RP; tunnel vision), and two patients with Stargardt disease (STGD) juvenile macular degeneration were included. To model the peripheral loss, we narrowed the visual field in controls to 10 degrees. Negative and positive contrast of motion signals were tested in random-dot kinematograms (RDKs), where shapes were separated from the background by the motion of dots based on coherence, direction, or velocity. The task was to distinguish a circle from an ellipse. The difficulty of the task increased as the ellipse became more circular, until reaching the acuity limit. Results: The high-velocity, negative-contrast condition was more difficult for all, and for patients with STGD it was too difficult to participate. A slower velocity improved acuity for all participants. Conclusions: The proposed acuity testing not only allows for a full assessment of vision but also advances the capability of standard testing, with the potential to detect spared visual functions. Translational Relevance: The motion-based acuity task might be a practical tool for assessing vision loss and for revealing undamaged or strengthened properties of the injured visual system that go undetected by standard testing, as suggested here for two patients with STGD and three patients with RP.

Giesel, M., Yakovleva, A., Bloj, M., Wade, A. R., Norcia, A. M., & Harris, J. M. (2019). Relative contributions to vergence eye movements of two binocular cues for motion-in-depth. Scientific Reports, 9, 17412. doi:10.1038/s41598-019-53902-y

Abstract: When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that, under our conditions, IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal.

Costela, F. M., McCamy, M. B., Coffelt, M., Otero-Millan, J., Macknik, S. L., & Martinez-Conde, S. (2017). Changes in visibility as a function of spatial frequency and microsaccade occurrence. European Journal of Neuroscience, 45(3), 433–439. doi:10.1111/ejn.13487

Abstract: Fixational eye movements (FEM), including microsaccades, drift, and tremor, shift our eye position during ocular fixation, producing retinal motion that is thought to help visibility by counteracting neural adaptation to unchanging stimulation. Yet, how each FEM type influences this process is still debated. Recent studies found little to no relationship between microsaccades and visual perception of spatial frequencies (SF), and concluded that any effects microsaccades may have on vision do not extend to the SF domain. However, these conclusions were based on coarse analyses that make it hard to appreciate the actual effects of microsaccades on target visibility as a function of SF. Thus, how microsaccades contribute to the visibility of stimuli of different SFs remains unclear. Here we asked how the visibility of targets of various SFs changed over time, in relationship with concurrent microsaccade production. Participants continuously reported on changes in target visibility, allowing us to time-lock ongoing changes in microsaccade parameters to perceptual transitions in visibility. Microsaccades restored/increased the visibility of low SF targets more efficiently than that of high SF targets. Yet, microsaccade rates rose before periods of increased visibility, and dropped before periods of diminished visibility, suggesting that microsaccades boosted target visibility across a wide range of SFs. Our data also indicate that visual stimuli fade/become harder to see less often in the presence of microsaccades. In addition, larger microsaccades restored/increased target visibility more effectively than smaller microsaccades. These combined results support the proposal that microsaccades enhance visibility across a broad variety of SFs.
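Studies like Costela et al. depend on detecting microsaccades in the recorded gaze stream. Their exact pipeline is not reproduced here, but a common starting point in the field is the velocity-threshold algorithm of Engbert & Kliegl (2003); the sketch below is a generic illustration of that approach, not the authors' code, with x and y as gaze traces in degrees and fs as the sampling rate in Hz.

```python
# Generic velocity-threshold (micro)saccade detector in the spirit of
# Engbert & Kliegl (2003). Illustrative sketch only, not the pipeline
# used by Costela et al.
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    # Velocity via a 5-sample moving difference:
    # v[n] ~ (x[n+2] + x[n+1] - x[n-1] - x[n-2]) * fs / 6
    kernel = [1, 1, 0, -1, -1]
    vx = np.convolve(x, kernel, "same") * (fs / 6.0)
    vy = np.convolve(y, kernel, "same") * (fs / 6.0)
    # Median-based (robust) velocity threshold per axis; small floor
    # avoids division by zero on flat traces.
    sx = max(np.sqrt(np.median(vx**2) - np.median(vx) ** 2), 1e-9)
    sy = max(np.sqrt(np.median(vy**2) - np.median(vy) ** 2), 1e-9)
    # Elliptic criterion: a sample is "saccadic" if its velocity exceeds
    # lam standard deviations on the combined axes.
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1
    # Group consecutive supra-threshold samples into events that last at
    # least min_samples samples; return (start, end) index pairs.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events
```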
The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('515','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_515\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Previous work indicates that threatening facial expressions with averted eye gaze can act as a signal of imminent danger, enhancing attentional orienting in the gazed-at direction. However, this threat-related gaze-cueing effect is only present in individuals reporting high levels of anxiety. The present study used eye tracking to investigate whether additional directional social cues, such as averted angry and fearful human body postures, not only cue attention, but also the eyes. The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('515','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_515\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1080\/02699931.2015.1013089\" title=\"Follow DOI:10.1080\/02699931.2015.1013089\" target=\"_blank\">doi:10.1080\/02699931.2015.1013089<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('515','tp_links')\">Close<\/a><\/p><\/div><\/td><\/tr><tr class=\"tp_publication tp_publication_article\"><td class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Chen, Lijing;  Yang, Yufang<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('1884','tp_links')\" style=\"cursor:pointer;\">Emphasizing the only character: EMPHASIS, attention and contrast<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Cognition, <\/span><span class=\"tp_pub_additional_volume\">vol. 136, <\/span><span class=\"tp_pub_additional_pages\">pp. 
Chen, Lijing; Yang, Yufang (2015). Emphasizing the only character: EMPHASIS, attention and contrast. Cognition, 136, 222–227. doi:10.1016/j.cognition.2014.11.015

Abstract: In conversations, pragmatic information such as emphasis is important for identifying the speaker's/writer's intention. The present research examines the cognitive processes involved in emphasis processing. Participants read short discourses that introduced one or two character(s), with the character being emphasized or non-emphasized in subsequent texts. Eye movements showed that: (1) early processing of the emphasized word was facilitated, which may have been due to increased attention allocation, whereas (2) late integration of the emphasized character was inhibited when the discourse involved only this character. These results indicate that it is necessary to include other characters as contrastive characters to facilitate the integration of an emphasized character, and support the existence of a relationship between Emphasis and Contrast computation. Taken together, our findings indicate that both attention allocation and contrast computation are involved in emphasis processing, and support the incremental nature of sentence processing and the importance of contrast in discourse comprehension.
Juhasz, Barbara J.; Gullick, Margaret M.; Shesler, Leah W. (2011). The effects of age-of-acquisition on ambiguity resolution: Evidence from eye movements. Journal of Eye Movement Research, 4(1), 1–14. doi:10.16910/jemr.4.1.4

Abstract: Words that are rated as acquired earlier in life receive shorter fixation durations than later acquired words, even when word frequency is adequately controlled (Juhasz & Rayner, 2003; 2006). Some theories posit that age-of-acquisition (AoA) affects the semantic representation of words (e.g., Steyvers & Tenenbaum, 2005), while others suggest that AoA should have an influence at multiple levels in the mental lexicon (e.g., Ellis & Lambon Ralph, 2000). In past studies, early and late AoA words have differed from each other in orthography, phonology, and meaning, making it difficult to localize the influence of AoA. Two experiments are reported which examined the locus of AoA effects in reading. Both experiments used balanced ambiguous words which have two equally frequent meanings acquired at different times (e.g., pot, tick). In Experiment 1, sentence context supporting either the early- or late-acquired meaning was presented prior to the ambiguous word; in Experiment 2, disambiguating context was presented after the ambiguous word. When prior context disambiguated the ambiguous word, meaning AoA influenced the processing of the target word. However, when disambiguating sentence context followed the ambiguous word, meaning frequency was the more important variable and no effect of meaning AoA was observed. These results, when combined with the past results of Juhasz and Rayner (2003; 2006), suggest that AoA influences access to multiple levels of representation in the mental lexicon. The results also have implications for theories of lexical ambiguity resolution, as they suggest that variables other than meaning frequency and context can influence resolution of noun-noun ambiguities.
Huestegge, Lynn; Skottke, Eva Maria; Anders, Sina; Müsseler, Jochen; Debus, Günter (2010). The development of hazard perception: Dissociation of visual orientation and hazard processing. Transportation Research Part F: Traffic Psychology and Behaviour, 13(1), 1–8. doi:10.1016/j.trf.2009.09.005

Abstract: Eye movements are a key behavior for visual information processing in traffic situations and for vehicle control. Previous research showed that effective ways of eye guidance are related to better hazard perception skills. Furthermore, hazard perception is reported to be faster for experienced drivers as compared to novice drivers. However, little is known about whether this difference can be attributed to the development of visual orientation or of hazard processing. In the present study, we compared eye movements of 20 inexperienced and 20 experienced drivers in a hazard perception task. We separately measured (a) the interval between the onset of a static hazard scene and the first fixation on a potential hazard, and (b) the interval between the first fixation on a potential hazard and the final response. While overall RT was faster for experienced compared to inexperienced drivers, the scanning patterns revealed that this difference was due to faster processing after the initial fixation on the hazard, whereas scene scanning times until the initial fixation on the hazard did not differ between groups.
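The two intervals measured here decompose overall hazard-perception RT into a scanning component and a processing component. A minimal sketch of that decomposition, assuming fixations have already been parsed from the eye tracker data and the hazard is approximated by a rectangular area of interest; all names and numbers below are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset: float  # ms from scene onset
    x: float      # gaze position in pixels
    y: float

def first_hazard_fixation(fixations, aoi):
    """Onset of the first fixation inside the hazard AOI
    (left, top, right, bottom), or None if the hazard was never fixated."""
    left, top, right, bottom = aoi
    for f in fixations:
        if left <= f.x <= right and top <= f.y <= bottom:
            return f.onset
    return None

def decompose_rt(fixations, aoi, response_time):
    """Split total RT into (a) scene scanning time up to the first hazard
    fixation and (b) processing time from that fixation to the response."""
    t_fix = first_hazard_fixation(fixations, aoi)
    if t_fix is None:
        return None, None
    return t_fix, response_time - t_fix

# One illustrative trial: three fixations, button press at 1850 ms.
trial = [Fixation(120, 400, 300), Fixation(430, 640, 210), Fixation(820, 512, 388)]
print(decompose_rt(trial, aoi=(480, 350, 560, 420), response_time=1850))  # (820, 1030)
```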
McMullen, Patricia A.; MacSween, Lesley E.; Collin, Charles A. (2009). Behavioral effects of visual field location on processing motion- and luminance-defined form. Journal of Vision, 9(6), 1–11. doi:10.1167/9.6.24

Abstract: Traditional theories posit a ventral cortical visual pathway subserving object recognition regardless of the information defining the contour. However, functional magnetic resonance imaging (fMRI) studies have shown dorsal cortical activity during visual processing of static luminance-defined (SL) and motion-defined form (MDF). It is unknown if this activity is supported behaviorally, or if it depends on central or peripheral vision. The present study compared behavioral performance with two types of MDF [one without translational motion (MDF) and another with (TM)] and SL shapes in a shape matching task where shape pairs appeared in the upper or lower visual fields or along the horizontal meridian of central or peripheral vision. MDF matching was superior to the other contour types regardless of location in central vision. Both MDF and TM matching were superior to SL matching for presentations in peripheral vision. Importantly, there was an advantage for MDF and TM matching in the lower peripheral visual field that was not present for SL forms. These results are consistent with previous behavioral findings that show no field advantage for static form processing and a lower field advantage for motion processing. They are also suggestive of more dorsal cortical involvement in the processing of shapes defined by motion than luminance.
Fornos, Angélica Pérez; Sommerhalder, Jörg; Rappaz, Benjamin; Pelizzone, Marco; Safran, Avinoam B. (2006). Processes involved in oculomotor adaptation to eccentric reading. Investigative Ophthalmology & Visual Science, 47(4), 1439–1447. doi:10.1167/iovs.05-0973

Abstract: PURPOSE: Adaptation to eccentric viewing in subjects with a central scotoma remains poorly understood. The purpose of this study was to analyze the adaptation stages of oculomotor control to forced eccentric reading in normal subjects. METHODS: Three normal adults (25.7 ± 3.8 years of age) were trained to read full-page texts using a restricted 10° × 7° viewing window stabilized at 15° eccentricity (lower visual field). Gaze position was recorded throughout the training period (1 hour per day for approximately 6 weeks). RESULTS: In the first sessions, eye movements appeared inappropriate for reading, mainly consisting of reflexive vertical (foveating) saccades. In early adaptation phases, both vertical saccade count and amplitude dramatically decreased. Horizontal saccade frequency increased in the first experimental sessions, then slowly decreased after 7 to 15 sessions. Amplitude of horizontal saccades increased with training. Gradually, accurate line jumps appeared, the proportion of progressive saccades increased, and the proportion of regressive saccades decreased. At the end of the learning process, eye movements mainly consisted of horizontal progressions, line jumps, and a few horizontal regressions. CONCLUSIONS: Two main adaptation phases were distinguished: a "faster" vertical process aimed at suppressing reflexive foveation and a "slower" restructuring of the horizontal eye movement pattern. The vertical phase consisted of a rapid reduction in the number of vertical saccades and a rapid but more progressive adjustment of remaining vertical saccades. The horizontal phase involved the amplitude adjustment of horizontal saccades (mainly progressions) to the text presented and the reduction of regressions required.
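The eye-movement categories in this abstract (progressions, regressions, line jumps, vertical foveating saccades) can be roughly approximated from raw saccade displacements. A coarse illustrative classifier; the thresholds are assumptions for the sketch, not values from the paper:

```python
def classify_saccade(dx, dy, line_height=30.0, return_sweep_x=-100.0):
    """Coarse reading-saccade classification from horizontal (dx) and
    vertical (dy) displacement in pixels (screen y grows downwards).
    Thresholds are illustrative only."""
    if dy > line_height / 2 and dx < return_sweep_x:
        return "line jump"      # downward step plus a large leftward return sweep
    if abs(dy) <= line_height / 2:
        return "progression" if dx > 0 else "regression"
    return "vertical"           # e.g., a reflexive foveating saccade

print(classify_saccade(45, 2))     # progression
print(classify_saccade(-30, -1))   # regression
print(classify_saccade(-400, 32))  # line jump
print(classify_saccade(5, -180))   # vertical
```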
Lehtimäki, Taina M.; Reilly, Ronan G. (2005). Improving eye movement control in young readers. Artificial Intelligence Review, 24(3-4), 477–488. doi:10.1007/s10462-005-9010-x

Abstract: The objective of our study is to design and evaluate an oculomotor reading aid for beginning readers. The aid consists of an eye-tracking device and a computer program that gives real-time feedback in the form of a game to the subject about their fixation position on words. An experimental study was conducted with 8-year-old children. We evaluated the effectiveness of the aid for each child by comparing the landing site distributions before and after playing the game. We found that the peak of the landing site distribution moved towards the optimal viewing position (OVP) for word identification after playing the game. We also determined that training had a positive effect on gaze duration, on the mean and distribution of number of fixations per word, and on the percentage of words with refixations in the majority of subjects.
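Landing-site distributions like the ones compared here are typically expressed relative to word length, so a shift of the peak towards the OVP shows up as a change in a normalised histogram. A small sketch under that assumption; the function name and the simulated data are illustrative:

```python
import numpy as np

def landing_site_histogram(landing_x, word_start_x, word_len_px, n_bins=10):
    """Density histogram of initial landing positions, normalised to word
    length (0 = word beginning, 1 = word end). The OVP for word
    identification lies slightly left of the word centre."""
    rel = (np.asarray(landing_x) - word_start_x) / word_len_px
    rel = rel[(rel >= 0) & (rel <= 1)]  # keep fixations that landed on the word
    return np.histogram(rel, bins=n_bins, range=(0.0, 1.0), density=True)

# Simulated pre-training data: landings clustered near the word beginning.
rng = np.random.default_rng(1)
landings = rng.normal(300.0 + 0.25 * 80.0, 12.0, 500)  # word at x=300, 80 px wide
hist, edges = landing_site_histogram(landings, word_start_x=300.0, word_len_px=80.0)
print(hist.round(2))  # training should shift this mass towards the OVP
```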
Contact

If you would like us to feature your EyeLink research, have ideas for posts, or have any questions about our hardware and software, please contact us. We are always happy to help. You can call us (+1-613-271-8686) or email us via our Get in Touch page: https://www.sr-research.com/get-in-touch/

References & Image Credits
Image Credits

1. Header Image by Hermann (https://pixabay.com/photos/books-education-school-literature-462579/), Pixabay License (https://pixabay.com/service/license-summary/)
Read More