
EyeLink Usability / Applied Publications

All EyeLink usability and applied research publications through 2021 (plus some from early 2022) are listed below by year. You can search the publications using keywords such as Driving, Sport, or Workload, or by individual author name. If we have missed any EyeLink usability or applied article, please email us!

387 entries · page 1 of 4

2022

Carlos Sillero‐Rejon; Osama Mahmoud; Ricardo M. Tamayo; Alvaro Arturo Clavijo‐Alvarez; Sally Adams; Olivia M. Maynard

Standardised packs and larger health warnings: Visual attention and perceptions among Colombian smokers and non‐smokers Journal Article

In: Addiction, pp. 1–11, 2022.


Aims: To measure how cigarette packaging (standardised packaging and branded packaging) and health warning size affect visual attention and pack preferences among Colombian smokers and non-smokers. Design: To explore visual attention, we used an eye-tracking experiment where non-smokers, weekly smokers and daily smokers were shown cigarette packs varying in warning size (30%-pictorial on top of the text, 30%-pictorial and text side-by-side, 50%, 70%) and packaging (standardised packaging, branded packaging). We used a discrete choice experiment (DCE) to examine the impact of warning size, packaging and brand name on preferences to try, taste perceptions and perceptions of harm. Setting: Eye-tracking laboratory, Universidad Nacional de Colombia, Bogotá, Colombia. Participants: Participants (n = 175) were 18 to 40 years old. Measurements: For the eye-tracking experiment, our primary outcome measure was the number of fixations toward the health warning compared with the branding. For the DCE, outcome measures were preferences to try, taste perceptions and harm perceptions. Findings: We observed greater visual attention to warning labels on standardised versus branded packages (F[3,167] = 22.87, P < 0.001) and when warnings were larger (F[9,161] = 147.17, P < 0.001); as warning size increased, the difference in visual attention to warnings between standardised and branded packaging decreased (F[9,161] = 4.44, P < 0.001). Non-smokers visually attended toward the warnings more than smokers, but as warning size increased these differences decreased (F[6,334] = 2.92 …

doi:10.1111/add.15779

Nadezhda Kerimova; Pavel Sivokhin; Diana Kodzokova; Karine Nikogosyan; Vasily Klucharev

Visual processing of green zones in shared courtyards during renting decisions: An eye-tracking study Journal Article

In: Urban Forestry and Urban Greening, vol. 68, pp. 127460, 2022.


We used an eye-tracking technique to investigate the effect of green zones and car ownership on the attractiveness of the courtyards of multistorey apartment buildings. Two interest groups—20 people who owned a car and 20 people who did not own a car—observed 36 images of courtyards. Images were digitally modified to manipulate the spatial arrangement of key courtyard elements: green zones, parking lots, and children's playgrounds. The participants were asked to rate the attractiveness of courtyards during hypothetical renting decisions. Overall, we investigated whether visual exploration and appraisal of courtyards differed between people who owned a car and those who did not. The participants in both interest groups gazed longer at perceptually salient playgrounds and parking lots than at greenery. We also observed that participants gazed significantly longer at the greenery in courtyards rated as most attractive than those rated as least attractive. They gazed significantly longer at parking lots in courtyards rated as least attractive than those rated as most attractive. Using regression analysis, we further investigated the relationship between gaze fixations on courtyard elements and the attractiveness ratings of courtyards. The model confirmed a significant positive relationship between the number and duration of fixations on greenery and the attractiveness estimates of courtyards, while the model showed an opposite relationship for the duration of fixations on parking lots. Interestingly, the positive association between fixations on greenery and the attractiveness of courtyards was significantly stronger for participants who owned cars than for those who did not. These findings confirmed that the more people pay attention to green areas, the more positively they evaluate urban areas. The results also indicate that urban greenery may differentially affect the preferences of interest groups.

doi:10.1016/j.ufug.2022.127460

2021

Chou P. Hung; Chloe Callahan-Flintoft; Paul D. Fedele; Kim F. Fluitt; Barry D. Vaughan; Anthony J. Walker; Min Wei

Low-contrast acuity under strong luminance dynamics and potential benefits of divisive display augmented reality Journal Article

In: Journal of Perceptual Imaging, vol. 4, no. 1, pp. 1–9, 2021.


Understanding and predicting outdoor visual performance in augmented reality (AR) requires characterizing and modeling vision under strong luminance dynamics, including luminance differences of 10000-to-1 in a single image (high dynamic range, HDR). Classic models of vision, based on displays with 100-to-1 luminance contrast, have limited ability to generalize to HDR environments. An important question is whether low-contrast visibility, potentially useful for titrating saliency for AR applications, is resilient to saccade-induced strong luminance dynamics. The authors developed an HDR display system with up to 100,000-to-1 contrast and assessed how strong luminance dynamics affect low-contrast visual acuity. They show that, immediately following flashes of 25× or 100× luminance, visual acuity is unaffected at 90% letter Weber contrast and only minimally affected at lower letter contrasts (up to +0.20 LogMAR for 10% contrast). The resilience of low-contrast acuity across luminance changes opens up research on divisive display AR (ddAR) to effectively titrate salience under naturalistic HDR luminance.

doi:10.2352/j.percept.imaging.2021.4.1.010501

Leah A. Irish; Allison C. Veronda; Amanda E. Lamsweerde; Michael P. Mead; Stephen A. Wonderlich

The process of developing a sleep health improvement plan: A lab-based model of self-help behavior Journal Article

In: International Journal of Behavioral Medicine, vol. 28, no. 1, pp. 96–106, 2021.


Background: Although self-help strategies to improve sleep are widely accessible, little is known about the ways in which individuals interact with these resources and the extent to which people are successful at improving their own sleep based on sleep health recommendations. The present study developed a lab-based model of self-help behavior by observing the development of sleep health improvement plans (SHIPs) and examining factors that may influence SHIP development. Method: Sixty healthy, young adults were identified as poor sleepers during one week of actigraphy baseline and recruited to develop and implement a SHIP. Participants viewed a list of sleep health recommendations through an eye tracker and provided information on their current sleep health habits. Each participant implemented their SHIP for 1 week during which sleep was assessed with actigraphy. Results: Current sleep health habits, but not patterns of visual attention, predicted SHIP goal selection. Sleep duration increased significantly during the week of SHIP implementation. Conclusions: Findings indicate that the SHIP protocol is an effective strategy for observing self-help behavior and examining factors that influence goal selection. The increase in sleep duration suggests that individuals may be successful at extending their own sleep, though causal mechanisms have not yet been established. This study presents a lab-based protocol for studying self-help sleep improvement behavior and takes an initial step toward gaining knowledge required to improve sleep health recommendations.

doi:10.1007/s12529-020-09904-6

Fatemeh Jam; Hamid Reza Azemati; Abdulhamid Ghanbaran; Jamal Esmaily; Reza Ebrahimpour

The role of expertise in visual exploration and aesthetic judgment of residential building façades: An eye-tracking study Journal Article

In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–16, 2021.


The building façade has considerable effects on the aesthetic experience of observers. However, the experience may differ depending on the observers' expertise. This study was conducted to explore the impact of expertise on preference, visual exploration, and cognitive experience during the aesthetic judgment of designed façades. For this purpose, we developed a paradigm in two separate parts: aesthetic judgment (AJ) and eye movement recording (EMR). Thirty-eight participants participated in this experiment in two groups (21 experts/17 nonexperts). The results revealed significant differences between the two groups in terms of the type and number of preferred façades, as well as eye movement indicators. In addition, based on judgment reaction time and fixation duration as proxy measures of cognitive experience, it was found that expertise might be correlated with cognitive load and task demand. The findings indicate the importance of façades for both groups and suggest that their physical attributes could be effectively manipulated to impact aesthetic experiences in relation to architectural designs.

doi:10.1037/aca0000377

Ondřej Javora; Tereza Hannemann; Kristina Volná; Filip Děchtěrenko; Tereza Tetourová; Tereza Stárková; Cyril Brom

Is contextual animation needed in multimedia learning games for children? An eye tracker study Journal Article

In: Journal of Computer Assisted Learning, vol. 37, no. 2, pp. 305–318, 2021.


The present study investigates affective-motivational, attention, and learning effects of an unexplored emotional design manipulation: contextual animation (animation of contextual elements) in multimedia learning games (MLGs) for children. Participants (N = 134; M_age = 9.25; Grades 3 and 4) learned either from an experimental version of the MLG with a high amount of contextual animation or from an identical MLG with no contextual animation (control). Children strongly preferred (χ² = 87.04, p < .001) and found the experimental version more attractive (p < .001 …

doi:10.1111/jcal.12489

Jia Jin; Chenchen Lin; Fenghua Wang; Ting Xu; Wuke Zhang

A study of cognitive effort involved in the framing effect of summary descriptions of online product reviews for search vs. experience products Journal Article

In: Electronic Commerce Research, pp. 1–22, 2021.


Few studies have focused on summary descriptions of online product reviews regarding purchase decisions, and there is a gap between individual product reviews and summary descriptions of online product reviews. The current study applied eye-tracking to explore how the product type moderates the framing effect of summary descriptions of product reviews on e-consumers' purchase decisions. The results showed that product type moderated the framing effect of summary reviews on e-consumers' purchase intention. Specifically, for search products, compared with a negative frame, a positive frame increased e-consumers' attention to function attributes and led to higher purchase intention. However, with experience products, e-consumers' attention and purchase intention did not vary across framing messages. Referring to information asymmetry theory and signal theory, we posit that the cognitive effort involved in summary review information is high for search products and low for experience products since summary reviews are a more useful signal in reducing information asymmetry for search products than for experience products. The theoretical and practical implications are also discussed.

doi:10.1007/s10660-021-09491-y

Rebecca L. Johnson; Devika Nambiar; Gabriella Suman

Using eye-movements to assess underlying factors in online purchasing behaviors Journal Article

In: International Journal of Consumer Studies, pp. 1–16, 2021.


The field of consumer neuroscience allows researchers to account for an individual's explicit reported behaviors as well as their implicit behaviors that are reflected in the neural mechanisms that occur during the purchase decision phase of a consumer's online shopping experience. The purpose of the current study was to use eye-tracking technology in conjunction with self-report purchase intention data to observe the relative impact of star rating, price, discount, and time pressure on purchase decisions. The results suggest that purchase intention was most affected by star rating, price, and discount with higher purchase intentions on items with higher star ratings, lower prices, and greater discounts. The eye movement data revealed that these factors, as well as time pressure, influenced where consumers directed their attention in making their purchasing decisions. These findings have significant implications for future ecommerce marketing strategy, especially across efforts to increase purchase intention.

doi:10.1111/ijcs.12762

Miguel A. Lago; Aditya Jonnalagadda; Craig K. Abbey; Bruno B. Barufaldi; Predrag R. Bakic; Andrew D. A. Maidment; Winifred K. Leung; Susan P. Weinstein; Brian S. Englander; Miguel P. Eckstein

Under-exploration of three-dimensional images leads to search errors for small salient targets Journal Article

In: Current Biology, vol. 31, no. 5, pp. 1099–1106, 2021.


Advances in 3D imaging technology are transforming how radiologists search for cancer and how security officers scrutinize baggage for dangerous objects. These new 3D technologies often improve search over 2D images but vastly increase the image data. Here, we investigate 3D search for targets of various sizes in filtered noise and digital breast phantoms. For a Bayesian ideal observer optimally processing the filtered noise and a convolutional neural network processing the digital breast phantoms, search with 3D image stacks increases target information and improves accuracy over search with 2D images. In contrast, 3D search by humans leads to high miss rates for small targets easily detected in 2D search, but not for larger targets more visible in the visual periphery. Analyses of human eye movements, perceptual judgments, and a computational model with a foveated visual system suggest that human errors can be explained by interaction among a target's peripheral visibility, eye movement under-exploration of the 3D images, and a perceived overestimation of the explored area. Instructing observers to extend the search reduces 75% of the small target misses without increasing false positives. Results with twelve radiologists confirm that even medical professionals reading realistic breast phantoms have high miss rates for small targets in 3D search. Thus, under-exploration represents a fundamental limitation to the efficacy with which humans search in 3D image stacks and miss targets with these prevalent image technologies. Will 3D imaging technologies always lead to improvements for the visual search of targets? Lago et al. show that, when humans search 3D image stacks, they under-explore with eye movements, overestimate the area they have searched, and often miss small targets that are salient in 2D images.

doi:10.1016/j.cub.2020.12.029

Wei Li; Yushi Jiang; Miao Miao; Qing Yan; Fan He

Image congruence and visual object structure of anthropomorphic advertisement-eye movement research based on self-construct Journal Article

In: Journal of Contemporary Marketing Science, vol. 4, no. 2, pp. 260–279, 2021.


Purpose – Enterprises often use anthropomorphic images to display products. In this study, by discussing the differences between juxtaposed and fused anthropomorphic images, the authors distinguish the boundary conditions of the influence of different visual object structures on consumers' attention. Design/methodology/approach – Based on schema theory and information processing theory and using eye movement methods, this study analyzed the attractiveness of anthropomorphic images to consumers under different congruence levels through a 2 (congruence: high vs. low) × 2 (visual object structure: juxtaposition vs. fusion) × 2 (self-construct: interdependent vs. independent) experiment. The study examines the difference in attractiveness for interdependent and independent consumers under high congruence and under the juxtaposition and fusion visual object structures. Findings – The results show that, compared with low congruence anthropomorphic images, high congruence anthropomorphic images attract more consumer attention. When the congruence of anthropomorphic images is low, the juxtaposition structure is more attractive to consumers than the fusion structure. When the congruence of anthropomorphic images is high, for independent self-consumers the fusion structure image is more attractive than the juxtaposition image, while for interdependent self-consumers the juxtaposition image is more attractive than the fusion image. Originality/value – The conclusion enriches anthropomorphic marketing theory. It reveals the different degrees of attention paid to anthropomorphic images by consumers with different types of self-construct. Eye movement methods provide a new perspective for the study of anthropomorphic marketing and a reference for enterprises publicizing products or services through anthropomorphic images.

doi:10.1108/jcmars-07-2021-0027

Song Liang; Ruihang Liu; Jiansheng Qian

Fixation prediction for advertising images: Dataset and benchmark Journal Article

In: Journal of Visual Communication and Image Representation, vol. 81, pp. 103356, 2021.


Existing saliency prediction methods focus on exploring a universal saliency model for natural images; relatively few address advertising images, which typically consist of both textual regions and pictorial regions. To fill this gap, we first build an advertising image database, named ADD1000, recording 57 subjects' eye movement data for 1000 ad images. Compared to natural images, advertising images contain more artificial scenarios and show stronger persuasiveness and deliberateness, while the impact of this scene heterogeneity on visual attention is rarely studied. Moreover, text elements and picture elements express closely related semantic information to highlight product or brand in ad images, while their respective contributions to visual attention are also less known. Motivated by these, we further propose a saliency prediction model for advertising images based on text enhanced learning (TEL-SP), which comprehensively considers the interplay between textual region and pictorial region. Extensive experiments on the ADD1000 database show that the proposed model outperforms existing state-of-the-art methods.

doi:10.1016/j.jvcir.2021.103356

Sixin Liao; Lili Yu; Erik D. Reichle; Jan Louis Kruger

Using eye movements to study the reading of subtitles in video Journal Article

In: Scientific Studies of Reading, vol. 25, no. 5, pp. 417–435, 2021.


This article reports the first eye-movement experiment to examine how the presence versus absence of concurrent video content and presentation speed affect the reading of subtitles. Results indicated that participants adapted their visual routines to examine video content while simultaneously prioritizing the reading of subtitles, especially when the latter was displayed only briefly. Although decisions about when and where to move the eyes largely remained under local (cognitive) control, this control was also modulated by global task demands, suggesting an integration of local and global eye-movement control. The theoretical and pedagogical implications of these findings are discussed, and we also briefly describe a new theoretical framework for understanding all forms of multimodal reading, including the reading of subtitles in video.

doi:10.1080/10888438.2020.1823986

Qunyue Liu; Zhipeng Zhu; Xianjun Zeng; Zhixiong Zhuo; Baojian Ye; Lei Fang; Qitang Huang; Pengcheng Lai

The impact of landscape complexity on preference ratings and eye fixation of various urban green space settings Journal Article

In: Urban Forestry and Urban Greening, vol. 66, pp. 127411, 2021.


People's perceptions on landscapes are important in the design process, and are closely associated with viewing behavior. However, little is known about the perceived landscape complexity of different urban green space settings in relation to people's preference and eye movements. This study, therefore, investigated the influence of landscape complexity on preference ratings and eye fixation of lawn, path, plaza, and waterfront settings of urban green spaces. Six images for each type of setting were selected as stimuli and further classified into three categories based on the participants' mean ratings of landscape complexity. Forty valid responses were obtained. The results indicated that participants' ratings of landscape complexity and preference were positively correlated in all types of settings. There were significant differences in fixation count and average fixation duration between images with different levels of landscape complexity in lawn and waterscape settings. Fixation count was positively correlated with landscape complexity level in all lawn, plaza and waterscape setting images. Moreover, average fixation duration was negatively correlated with landscape complexity level in all lawn and waterscape setting images. Preference ratings had no definite relationships with fixation counts and average fixation duration. The findings of this study will help designers and urban park managers to effectively incorporate public perceptions in design and decision-making process. In addition, it provides new insights into the relationship between eye movements and landscape complexity and sheds some light on the application of eye tracking technology in landscape perception studies.

doi:10.1016/j.ufug.2021.127411

Beatriz Martín-Luengo; Andriy Myachykov; Yury Shtyrov

Deliberative process in sharing information with different audiences: Eye-tracking correlates Journal Article

In: Quarterly Journal of Experimental Psychology, pp. 1–12, 2021.


Research on conversational pragmatics demonstrates how interlocutors tailor the information they share depending on the audience. Previous research showed that, in informal contexts, speakers often provide several alternative answers, whereas in formal contexts, they tend to give only a single answer; however, the psychological underpinnings of these effects remain obscure. To investigate this answer selection process, we measured participants' eye movements in different experimentally modelled social contexts. Participants answered general knowledge questions by providing responses with either single (one) or plural (three) alternatives. Then, a formal (job interview) or informal (conversation with friends) context was presented and participants decided either to report or withdraw their responses after considering the given social context. Growth curve analysis on the eye movements indicates that the selected response option attracted more eye movements. There was a discrepancy between the answer selection likelihood and the proportion of fixations to the corresponding option—but only in the formal context. These findings support a more elaborate decision-making process in formal contexts. They also suggest that eye movements do not necessarily accompany the options considered in the decision-making process.

doi:10.1177/17470218211047437

Alexandre Milisavljevic; Fabrice Abate; Thomas Le Bras; Bernard Gosselin; Matei Mancas; Karine Doré-Mazars

Similarities and differences between eye and mouse dynamics during web pages exploration Journal Article

In: Frontiers in Psychology, vol. 12, pp. 554595, 2021.


The study of eye movements is a common way to non-invasively understand and analyze human behavior. However, eye-tracking techniques are very hard to scale, and require expensive equipment and extensive expertise. In the context of web browsing, these issues could be overcome by studying the link between the eye and the computer mouse. Here, we propose new analysis methods, and a more advanced characterization of this link. To this end, we recorded the eye, mouse, and scroll movements of 151 participants exploring 18 dynamic web pages while performing free viewing and visual search tasks for 20 s. The data revealed significant differences of eye, mouse, and scroll parameters over time which stabilize at the end of exploration. This suggests the existence of a task-independent relationship between eye, mouse, and scroll parameters, which are characterized by two distinct patterns: one common pattern for movement parameters and a second for dwelling/fixation parameters. Within these patterns, mouse and eye movements remained consistent with each other, while the scrolling behaved the opposite way.

doi:10.3389/fpsyg.2021.554595

Tarikere T. Niranjan; Narendra K. Ghosalya; Srinagesh Gavirneni

Crying wolf and a knowing wink: A behavioral study of order inflation and discounting in supply chains Journal Article

In: Production and Operations Management, pp. 1–18, 2021.


Two field case studies uncover information discounting in supply chains, which manifests in the form of schedule padding and order inflation. Buyers often exaggerate the urgency and quantity of their needs (“crying wolf”). However, both buyers and suppliers eliminate old, inflated orders from their behavioral ordering/supply system implicitly (“knowing wink”) even though they exist in the hyper-rational ordering system represented by the ERP system. This behavior results in (and from) low credibility of information exchanged between the buyers and suppliers and their subsequent actions, and settles in a suboptimal equilibrium. Eye tracking experiments based on these case studies unpack the psychophysiological mechanisms behind this behavior, specifically, how decision makers consider past UnFilled Orders under different experimental conditions. We find that merely improving the supplier behavior does not help: information discounting reduces only when we sensitize the buyer to the improvement in the supplier behavior. However, this comes with no financial performance improvements; performance improves by further educating the buyers of the optimal target inventory.

Close

  • doi:10.1111/poms.13595

Katrina Oselinsky; Ashlie Johnson; Pamela Lundeberg; Abby Johnson Holm; Megan Mueller; Dan J. Graham

GMO food labels do not affect college student food selection, despite negative attitudes towards GMOs Journal Article

In: International Journal of Environmental Research and Public Health, vol. 18, no. 4, pp. 1761, 2021.

Abstract | Links | BibTeX

@article{Oselinsky2021,
title = {GMO food labels do not affect college student food selection, despite negative attitudes towards GMOs},
author = {Katrina Oselinsky and Ashlie Johnson and Pamela Lundeberg and Abby Johnson Holm and Megan Mueller and Dan J. Graham},
doi = {10.3390/ijerph18041761},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Environmental Research and Public Health},
volume = {18},
number = {4},
pages = {1761},
abstract = {US Public Law 114–216 dictates that food producers in the United States of America will be required to label foods containing genetically modified organisms (GMOs) starting in 2022; however, there is little empirical evidence demonstrating how U.S. consumers would use food labels that indicate the presence or absence of GMOs. The aim of this two-phase study was to determine how attitudes towards GMOs relate to food choices and how labels indicating the presence or absence of GMOs differentially impact choices among college students—the age group which values transparent food labeling more than any other. Participants (n = 434) made yes/no choices for each of 64 foods. In both phases of the study, participants were randomly assigned to seeing GMO Free labels, contains GMOs labels, or no GMO labels. Across the two phases, 85% of participants reported believing that GMOs were at least somewhat dangerous to health (42% believed GMOs to be dangerous), yet in both studies, although eye-tracking data verified that participants attended to the GMO labels, these labels did not significantly affect food choices. Although college consumers may believe GMOs to be dangerous, their food choices do not reflect this belief.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/ijerph18041761

Nadia Paraskevoudi; John S. Pezaris

Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision Journal Article

In: Scientific Reports, vol. 11, pp. 11121, 2021.

Abstract | Links | BibTeX

@article{Paraskevoudi2021,
title = {Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision},
author = {Nadia Paraskevoudi and John S. Pezaris},
doi = {10.1038/s41598-021-86996-4},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {11121},
publisher = {Nature Publishing Group UK},
abstract = {The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses incorporating eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation, and head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with below 15% accuracy under head-steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those not incorporating eye movements.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-021-86996-4
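
For context, reading speed in MNREAD-style tasks is conventionally expressed as words read correctly per minute. The helper below reproduces the relative figures implied by the abstract; the 118 WPM "normal" baseline is inferred from the reported 59 WPM = 50%-of-normal relation, not stated in the paper:

```python
def reading_speed_wpm(words_correct: int, duration_s: float) -> float:
    """Reading speed as words read correctly per minute (MNREAD convention)."""
    return words_correct * 60.0 / duration_s

# Worked check against the abstract (values in WPM at 1.4 logMAR).
normal_wpm = 118.0        # inferred baseline: 59 WPM is reported as 50% of normal
gaze_compensated = 59.0   # full gaze compensation
head_steered = 0.7        # head steering alone

print(f"gaze-compensated: {gaze_compensated / normal_wpm:.0%} of normal")  # 50%
print(f"head-steered: {head_steered / normal_wpm:.1%} of normal")          # 0.6%
```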

Suhyun Park; Louis Williams; Rebecca Chamberlain

Global saccadic eye movements characterise artists' visual attention while drawing Journal Article

In: Empirical Studies of the Arts, pp. 1–17, 2021.

Abstract | Links | BibTeX

@article{Park2021b,
title = {Global saccadic eye movements characterise artists' visual attention while drawing},
author = {Suhyun Park and Louis Williams and Rebecca Chamberlain},
doi = {10.1177/02762374211001811},
year = {2021},
date = {2021-01-01},
journal = {Empirical Studies of the Arts},
pages = {1--17},
abstract = {Previous research has shown that artists employ flexible attentional strategies during offline perceptual tasks. The current study explored visual processing online, by tracking the eye movements of artists and non-artists (n=65) while they produced representational drawings of photographic stimuli. The findings revealed that it is possible to differentiate artists from non-artists on the basis of the relative amount of global-to-local saccadic eye movements they make when looking at the target stimulus while drawing, but not in a preparatory free viewing phase. Results indicated that these differences in eye movements are not specifically related to representational drawing ability, and may be a feature of artistic ability more broadly. This eye movement analysis technique may be used in future research to characterise the dynamics of attentional shifts in eye movements while artists are carrying out a range of artistic tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/02762374211001811
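
A global-to-local saccade ratio of this kind can be operationalised by splitting saccades on amplitude. In the minimal sketch below, the 2° cutoff and the example amplitudes are illustrative assumptions rather than the study's definitions:

```python
import numpy as np

def global_local_ratio(amplitudes_deg, threshold_deg=2.0):
    """Ratio of global (large) to local (small) saccades by amplitude."""
    amps = np.asarray(amplitudes_deg, dtype=float)
    n_global = int((amps > threshold_deg).sum())
    n_local = int((amps <= threshold_deg).sum())
    return n_global / max(n_local, 1)  # guard against division by zero

# Hypothetical amplitudes (degrees) from one drawing trial.
print(global_local_ratio([0.8, 1.5, 4.2, 6.0, 0.9, 3.1]))  # -> 1.0
```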

Gordy Pleyers; Nicolas Vermeulen

How does interactivity of online media hamper ad effectiveness Journal Article

In: International Journal of Market Research, vol. 63, no. 3, pp. 335–352, 2021.

Abstract | Links | BibTeX

@article{Pleyers2021,
title = {How does interactivity of online media hamper ad effectiveness},
author = {Gordy Pleyers and Nicolas Vermeulen},
doi = {10.1177/1470785319867640},
year = {2021},
date = {2021-01-01},
journal = {International Journal of Market Research},
volume = {63},
number = {3},
pages = {335--352},
abstract = {The development of the Internet has increasingly led to advertisements presented on rich and interactive websites offering users a high level of control over the contents they are exposed to—sometimes to the extent of allowing them to skip “unwanted” ads preceding the desired content. While previous studies have shown that such interactivity and control can positively impact users' subjective experience and attitude toward the advertisements, the present study examined their impact on users' attention to the ad (using eye-tracking) and actual ad effectiveness (ad memory). It relied on an experimental design allowing for comparing the effectiveness of similar ads that were presented by realistic interfaces simulating common types of online media (in addition to “traditional television” as a form of passive baseline comparison condition). The interfaces consisted of a news website (including many stimuli surrounding the ads and an “ad countdown timer,” that might detract users' attention from the ads) and YouTube (also including the “skip ad” option). Ad memory correlated positively (negatively) with gaze direction to the ad area (outside the ad area) and was particularly low when users had the opportunity to stop the ad after a few seconds. These results emphasize the scale of ad effectiveness decrease that may occur when the media interfaces offer users easy ways of avoiding video ads by gazing toward surrounding stimuli and by skipping the ads. The implications of these findings for advertisers are addressed, and it is suggested that future studies on the topic should include other measures of ad effectiveness and other distracting factors that might further detract users from online ad video content in real-life contexts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/1470785319867640

Francesca Ales; Luciano Giromini; Lara Warmelink; Megan Polden; Thomas Wilcockson; Claire Kelly; Christina Winters; Alessandro Zennaro; Trevor Crawford

An eye tracking study on feigned schizophrenia Journal Article

In: Psychological Injury and Law, vol. 14, no. 3, pp. 213–226, 2021.

Abstract | Links | BibTeX

@article{Ales2021,
title = {An eye tracking study on feigned schizophrenia},
author = {Francesca Ales and Luciano Giromini and Lara Warmelink and Megan Polden and Thomas Wilcockson and Claire Kelly and Christina Winters and Alessandro Zennaro and Trevor Crawford},
doi = {10.1007/s12207-021-09421-1},
year = {2021},
date = {2021-01-01},
journal = {Psychological Injury and Law},
volume = {14},
number = {3},
pages = {213--226},
publisher = {Springer US},
abstract = {Research on malingering detection has not yet taken full advantage of eye tracking technology. In particular, while several studies indicate that patients with schizophrenia behave notably differently from controls on specific oculomotor tasks, no study has yet investigated whether experimental participants instructed to feign could reproduce those behaviors, if coached to do so. Due to the automatic nature of eye movements, we anticipated that eye tracking analyses would help detect feigned schizophrenic problems. To test this hypothesis, we recorded the eye movements of 83 adult UK volunteers, and tested whether eye movements of healthy volunteers instructed to feign schizophrenia (n = 43) would differ from those of honest controls (n = 40), while engaging in smooth pursuit and pro- and anti-saccade tasks. Additionally, results from our investigation were also compared against previously published data observed in patients with schizophrenia performing similar oculomotor tasks. Data analysis showed that eye movements of experimental participants instructed to feign (a) only partially differed from those of controls and (b) did not closely resemble those from patients with schizophrenia reported in previously published papers. Taken together, these results suggest that examination of eye movements does have the potential to help detect feigned schizophrenia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s12207-021-09421-1

Thomas A. Busey; Nicholas Heise; R. Austin Hicklin; Bradford T. Ulery; Jo Ann Buscaglia

Characterizing missed identifications and errors in latent fingerprint comparisons using eye-tracking data Journal Article

In: PLoS ONE, vol. 16, no. 5, pp. e0251674, 2021.

Abstract | Links | BibTeX

@article{Busey2021,
title = {Characterizing missed identifications and errors in latent fingerprint comparisons using eye-tracking data},
author = {Thomas A. Busey and Nicholas Heise and R. Austin Hicklin and Bradford T. Ulery and Jo Ann Buscaglia},
doi = {10.1371/journal.pone.0251674},
year = {2021},
date = {2021-01-01},
journal = {PLoS ONE},
volume = {16},
number = {5},
pages = {e0251674},
abstract = {Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than those comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which missed IDs depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0251674
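
Proxy metrics of the kind described (total comparison time, number of regions visited) can be sketched directly from fixation data; the grid-based definition of a "region" and the data layout below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def regions_visited(fix_xy, cell_px=100):
    """Count distinct grid cells covered by fixations (cell size is an assumption)."""
    cells = np.floor(np.asarray(fix_xy, dtype=float) / cell_px).astype(int)
    return len({tuple(c) for c in cells})

def comparison_time_s(fix_durations_ms):
    """Total fixation time on the comparison, in seconds."""
    return sum(fix_durations_ms) / 1000.0

# Hypothetical fixations: centres in image pixels, durations in ms.
fix_xy = [(120, 80), (130, 90), (410, 300), (720, 310)]
print(regions_visited(fix_xy))                   # -> 3
print(comparison_time_s([220, 180, 340, 260]))   # -> 1.0
```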

Matthew R. Cavanaugh; Lisa M. Blanchard; Michael McDermott; Byron L. Lam; Madhura Tamhankar; Steven E. Feldon

Efficacy of visual retraining in the hemianopic field after stroke: Results of a randomized clinical trial Journal Article

In: Ophthalmology, vol. 128, no. 7, pp. 1091–1101, 2021.

Abstract | Links | BibTeX

@article{Cavanaugh2021,
title = {Efficacy of visual retraining in the hemianopic field after stroke: Results of a randomized clinical trial},
author = {Matthew R. Cavanaugh and Lisa M. Blanchard and Michael McDermott and Byron L. Lam and Madhura Tamhankar and Steven E. Feldon},
doi = {10.1016/j.ophtha.2020.11.020},
year = {2021},
date = {2021-01-01},
journal = {Ophthalmology},
volume = {128},
number = {7},
pages = {1091--1101},
publisher = {Elsevier Inc},
abstract = {Purpose: To evaluate the efficacy of motion discrimination training as a potential therapy for stroke-induced hemianopic visual field defects. Design: Clinical trial. Participants: Forty-eight patients with stroke-induced homonymous hemianopia (HH) were randomized into 2 training arms: intervention and control. Patients were between 21 and 75 years of age and showed no ocular issues at presentation. Methods: Patients were trained on a motion discrimination task previously evidenced to reduce visual field deficits, but not in a randomized clinical trial. Patients were randomized with equal allocation to receive training in either their sighted or deficit visual fields. Training was performed at home for 6 months, consisting of repeated visual discriminations at a single location for 20 to 30 minutes daily. Study staff and patients were masked to training type. Testing before and after training was identical, consisting of Humphrey visual fields (Carl Zeiss Meditech), macular integrity assessment perimetry, OCT, motion discrimination performance, and visual quality-of-life questionnaires. Main Outcome Measures: Primary outcome measures were changes in perimetric mean deviation (PMD) on Humphrey Visual Field Analyzer in both eyes. Results: Mean PMDs improved over 6 months in deficit-trained patients (mean change in the right eye, 0.58 dB; 95% confidence interval, 0.07–1.08 dB; mean change in the left eye 0.84 dB; 95% confidence interval, 0.22–1.47 dB). No improvement was observed in sighted-trained patients (mean change in the right eye, 0.12 dB; 95% confidence interval, –0.38 to 0.62 dB; mean change in the left eye, 0.10 dB; 95% confidence interval, –0.52 to 0.72 dB). However, no significant differences were found between the alternative training methods (right eye},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ophtha.2020.11.020
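
Confidence intervals of this form can be computed as one-sample t-based intervals on per-patient change scores; a generic sketch follows (whether the trial used this exact method is not stated in the abstract, and the example data are hypothetical):

```python
import math
from scipy import stats

def mean_change_ci(changes, level=0.95):
    """Mean change with a two-sided t-based confidence interval."""
    n = len(changes)
    mean = sum(changes) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in changes) / (n - 1))
    half = stats.t.ppf(0.5 + level / 2, n - 1) * sd / math.sqrt(n)
    return mean, (mean - half, mean + half)

# Hypothetical PMD changes (dB) for one training arm.
changes = [0.9, 0.3, 1.2, -0.1, 0.8, 0.6, 0.4, 1.0, 0.2, 0.7, 0.5, 1.1]
mean, (lo, hi) = mean_change_ci(changes)
print(f"mean change {mean:.2f} dB (95% CI {lo:.2f} to {hi:.2f})")
```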

Ho Chen-En

What does professional experience have to offer? An eyetracking study of sight interpreting/translation behaviour Journal Article

In: Translation, Cognition and Behavior, vol. 4, no. 1, pp. 47–74, 2021.

Abstract | Links | BibTeX

@article{ChenEn2021,
title = {What does professional experience have to offer? An eyetracking study of sight interpreting/translation behaviour},
author = {Ho Chen-En},
doi = {10.1075/tcb.00047.ho},
year = {2021},
date = {2021-01-01},
journal = {Translation, Cognition and Behavior},
volume = {4},
number = {1},
pages = {47--74},
abstract = {This study investigated the impact of professional experience on the process and product of sight interpreting/translation (SiT). Seventeen experienced interpreters, with at least 150 days' professional experience, and 18 interpreting students were recruited to conduct three tasks: silent reading, reading aloud, and SiT. All participants had similar interpreter training backgrounds. The data of the SiT task are reported here, with two experienced interpreters (both AIIC members) assessing the participants' interpretations on accuracy and style, which includes fluency and other paralinguistic performance. The findings show that professional experience contributed to higher accuracy, although there was no between-group difference in the mean score on style, overall task time, length of the SiT output, and mean fixation duration of each stage of reading. The experienced practitioners exhibited more varied approaches at the beginning of the SiT task, with some biding their time longer than the others before oral production started, but quality was not affected. Moving along, the practitioners showed better language flexibility in that their renditions were faster, steadier, and less disrupted by pauses and the need to read further to maintain the flow of interpretation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1075/tcb.00047.ho

Francisco M. Costela; Stephanie M. Reeves; Russell L. Woods

An implementation of Bubble Magnification did not improve the video comprehension of individuals with central vision loss Journal Article

In: Ophthalmic and Physiological Optics, vol. 41, no. 4, pp. 842–852, 2021.

Abstract | Links | BibTeX

@article{Costela2021,
title = {An implementation of Bubble Magnification did not improve the video comprehension of individuals with central vision loss},
author = {Francisco M. Costela and Stephanie M. Reeves and Russell L. Woods},
doi = {10.1111/opo.12797},
year = {2021},
date = {2021-01-01},
journal = {Ophthalmic and Physiological Optics},
volume = {41},
number = {4},
pages = {842--852},
abstract = {Purpose: People with central vision loss (CVL) watch television, videos and movies, but often report difficulty and have reduced video comprehension. An approach to assist viewing videos is electronic magnification of the video itself, such as Bubble Magnification. Methods: We created a Bubble Magnification technique that displayed a magnified segment around the centre of interest (COI) as determined by the gaze of participants with normal vision. The 15 participants with CVL viewed video clips shown with 2× and 3× Bubble Magnification, and unedited. We measured video comprehension and gaze coherence. Results: Video comprehension was significantly worse with both 2× (p = 0.01) and 3× Bubble Magnification (p < 0.001) than the unedited video. There was no difference in gaze coherence across conditions (p ≥ 0.58). This was unexpected because we expected a benefit in both video comprehension and gaze coherence. This initial attempt to implement the Bubble Magnification method had flaws that probably reduced its effectiveness. Conclusions: In the future, we propose alternative implementations of Bubble Magnification, such as variable magnification and bubble size. This study is a first step in the development of an intelligent-magnification approach to providing a vision rehabilitation aid to assist people with CVL.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/opo.12797

Francisco M. Costela; Stephanie M. Reeves; Russell L. Woods

The effect of zoom magnification and large display on video comprehension in individuals with central vision loss Journal Article

In: Translational Vision Science and Technology, vol. 10, no. 8, pp. 30, 2021.

Abstract | Links | BibTeX

@article{Costela2021a,
title = {The effect of zoom magnification and large display on video comprehension in individuals with central vision loss},
author = {Francisco M. Costela and Stephanie M. Reeves and Russell L. Woods},
doi = {10.1167/tvst.10.8.30},
year = {2021},
date = {2021-01-01},
journal = {Translational Vision Science and Technology},
volume = {10},
number = {8},
pages = {30},
abstract = {Purpose: A larger display at the same viewing distance provides relative-size magnification for individuals with central vision loss (CVL). However, the resulting large visible area of the display is expected to result in more head rotation, which may cause discomfort. We created a zoom magnification technique that placed the center of interest (COI) in the center of the display to reduce the need for head rotation. Methods: In a 2 × 2 within-subject study design, 23 participants with CVL viewed video clips from 1.5 m (4.9 feet) shown with or without zoom magnification, and with a large (208 cm/82” diagonal, 69°) or a typical (84 cm/33”, 31°) screen. Head position was tracked and a custom questionnaire was used to measure discomfort. Results: Video comprehension was better with the large screen (P < 0.001) and slightly worse with zoom magnification (P = 0.03). Oddly, head movements did not vary with screen size (P = 0.63), yet were greater with zoom magnification (P = 0.001). This finding was unexpected, because the COI remains in the center with zoom magnification, but moves widely with a large screen and no magnification. Conclusions: This initial attempt to implement the zoom magnification method had flaws that may have decreased its effectiveness. In the future, we propose alternative implementations for zoom magnification, such as variable magnification. Translational Relevance: We present the first explicit demonstration that relative-size magnification improves the video comprehension of people with CVL when viewing video.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/tvst.10.8.30
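
The screen angles quoted above follow from the standard visual-angle formula θ = 2·arctan(w / 2d). A quick check, treating the quoted screen diagonals as the subtending extent, reproduces the 69° and 31° figures at the 1.5 m viewing distance:

```python
import math

def visual_angle_deg(extent_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a flat extent viewed head-on."""
    return math.degrees(2 * math.atan(extent_cm / (2 * distance_cm)))

# Figures from the abstract: 208 cm and 84 cm diagonals viewed from 1.5 m.
print(f"{visual_angle_deg(208, 150):.0f} deg")  # -> 69 deg
print(f"{visual_angle_deg(84, 150):.0f} deg")   # -> 31 deg
```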

Jessica Dawson; Tom Foulsham

Your turn to speak? Audiovisual social attention in the lab and in the wild Journal Article

In: Visual Cognition, pp. 1–19, 2021.

Abstract | Links | BibTeX

@article{Dawson2021,
title = {Your turn to speak? Audiovisual social attention in the lab and in the wild},
author = {Jessica Dawson and Tom Foulsham},
doi = {10.1080/13506285.2021.1958038},
year = {2021},
date = {2021-01-01},
journal = {Visual Cognition},
pages = {1--19},
publisher = {Taylor & Francis},
abstract = {In everyday group conversations, we must decide whom to pay attention to and when. This process of dynamic social attention is important for goals both perceptual and social. The present study investigated gaze during a conversation in a realistic group and in a controlled laboratory study where third-party observers watched videos of the same group. In both contexts, we explore how gaze allocation is related to turn-taking in speech. Experimental video clips were edited to either remove the sound, freeze the video, or transition to a blank screen, allowing us to determine how shifts in attention between speakers depend on visual or auditory cues. Gaze behaviour in the real, interactive situation was similar to the fixations made by observers watching a video. Eyetracked participants often fixated the person speaking and shifted gaze in response to changes in speaker, even when sound was removed or the video freeze-framed. These findings suggest we sometimes fixate the location of speakers even when no additional visual information can be gained. Our novel approach offers both a comparison of interactive and third-party viewing and the opportunity for controlled experimental manipulations. This delivers a rich understanding of gaze behaviour and multimodal attention while following a conversation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/13506285.2021.1958038

Y. B. Eisma; A. Reiff; L. Kooijman; D. Dodou; J. C. F. Winter

External human-machine interfaces: Effects of message perspective Journal Article

In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 78, pp. 30–41, 2021.

Abstract | Links | BibTeX

@article{Eisma2021,
title = {External human-machine interfaces: Effects of message perspective},
author = {Y. B. Eisma and A. Reiff and L. Kooijman and D. Dodou and J. C. F. Winter},
doi = {10.1016/j.trf.2021.01.013},
year = {2021},
date = {2021-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {78},
pages = {30--41},
abstract = {Future automated vehicles may be equipped with external Human-Machine Interfaces (eHMIs). Currently, little is known about the effect of the perspective of the eHMI message on crossing decisions of pedestrians. We performed an experiment to examine the effects of images depicting eHMI messages of different perspectives (egocentric from the pedestrian's point of view: WALK, DON'T WALK, allocentric: BRAKING, DRIVING, and ambiguous: GO, STOP) on participants' (N = 103) crossing decisions, response times, and eye movements. Considering that crossing the road can be cognitively demanding, we added a memory task in two-thirds of the trials. The results showed that egocentric messages yielded higher subjective clarity ratings than the other messages as well as higher objective clarity scores (i.e., more uniform crossing decisions) and faster response times than the allocentric BRAKING and the ambiguous STOP. When participants were subjected to the memory task, pupil diameter increased, and crossing decisions were reached faster as compared to trials without memory task. Regarding the ambiguous messages, most participants crossed for the GO message and did not cross for the STOP message, which points towards an egocentric perspective taken by the participant. More lengthy text messages (e.g., DON'T WALK) yielded a higher number of saccades but did not cause slower response times. We conclude that pedestrians find egocentric eHMI messages clearer than allocentric ones, and take an egocentric perspective if the message is ambiguous. Our results may have important implications, as the consensus among eHMI researchers appears to be that egocentric text-based eHMIs should not be used in traffic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.trf.2021.01.013

Yke Bauke Eisma; Clark Borst; René Paassen; Joost Winter

Augmented visual feedback: Cure or distraction? Journal Article

In: Human Factors, vol. 63, no. 7, pp. 1156–1168, 2021.

Abstract | Links | BibTeX

@article{Eisma2021a,
title = {Augmented visual feedback: Cure or distraction?},
author = {Yke Bauke Eisma and Clark Borst and René Paassen and Joost Winter},
doi = {10.1177/0018720820924602},
year = {2021},
date = {2021-01-01},
journal = {Human Factors},
volume = {63},
number = {7},
pages = {1156--1168},
abstract = {Objective: The aim of the study was to investigate the effect of augmented feedback on participants' workload, performance, and distribution of visual attention. Background: An important question in human–machine interface design is whether the operator should be provided with direct solutions. We focused on the solution space diagram (SSD), a type of augmented feedback that shows directly whether two aircraft are on conflicting trajectories. Method: One group of novices (n = 13) completed conflict detection tasks with SSD, whereas a second group (n = 11) performed the same tasks without SSD. Eye-tracking was used to measure visual attention distribution. Results: The mean self-reported task difficulty was substantially lower for the SSD group compared to the No-SSD group. The SSD group had a better conflict detection rate than the No-SSD group, whereas false-positive rates were equivalent. High false-positive rates for some scenarios were attributed to participants who misunderstood the SSD. Compared to the No-SSD group, the SSD group spent a large proportion of their time looking at the SSD aircraft while looking less at other areas of interest. Conclusion: Augmented feedback makes the task subjectively easier but has side effects related to visual tunneling and misunderstanding. Application: Caution should be exercised when human operators are expected to reproduce task solutions that are provided by augmented visual feedback.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/0018720820924602

Iain Fraser; Kelvin Balcombe; Louis Williams; Eugene McSorley

Preference stability in discrete choice experiments. Some evidence using eye-tracking Journal Article

In: Journal of Behavioral and Experimental Economics, vol. 94, pp. 101753, 2021.

Abstract | Links | BibTeX

@article{Fraser2021,
title = {Preference stability in discrete choice experiments. Some evidence using eye-tracking},
author = {Iain Fraser and Kelvin Balcombe and Louis Williams and Eugene McSorley},
doi = {10.1016/j.socec.2021.101753},
year = {2021},
date = {2021-01-01},
journal = {Journal of Behavioral and Experimental Economics},
volume = {94},
pages = {101753},
publisher = {Elsevier Inc.},
abstract = {We investigate the relationship between the extent of visual attention and preference stability in a discrete choice experiment using eye-tracking to investigate country of origin information for meat in the UK. By preference stability, we mean the extent to which choice task responses differ for an identical set of tasks for an individual. Our results reveal that the degree of visual attention, counter to our initial expectations, is positively related to the degree of preference instability. This means that preference instability does not necessarily indicate low levels of respondent engagement. We also find that those respondents exhibiting preference instability do not substantively differ from the rest of the sample in terms of their underlying preferences. Rather, these respondents spend longer looking at tasks that are similar in terms of utility, suggesting these respondents find these choices more difficult.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.socec.2021.101753
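
Preference stability in this sense can be scored as agreement between a respondent's answers to the same choice tasks presented twice. A minimal sketch, with the data layout as an illustrative assumption:

```python
def choice_stability(first_pass, second_pass):
    """Fraction of repeated identical tasks answered the same way."""
    if len(first_pass) != len(second_pass):
        raise ValueError("both passes must cover the same tasks")
    agree = sum(a == b for a, b in zip(first_pass, second_pass))
    return agree / len(first_pass)

# One hypothetical respondent: 8 repeated tasks, 6 answered consistently.
print(choice_stability("ABBACABA", "ABBACBAA"))  # -> 0.75
```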

Agostino Gibaldi; Silvio P. Sabatini

The saccade main sequence revised: A fast and repeatable tool for oculomotor analysis Journal Article

In: Behavior Research Methods, vol. 53, no. 1, pp. 167–187, 2021.

Abstract | Links | BibTeX

@article{Gibaldi2021,
title = {The saccade main sequence revised: A fast and repeatable tool for oculomotor analysis},
author = {Agostino Gibaldi and Silvio P. Sabatini},
doi = {10.3758/s13428-020-01388-2},
year = {2021},
date = {2021-01-01},
journal = {Behavior Research Methods},
volume = {53},
number = {1},
pages = {167--187},
publisher = {Behavior Research Methods},
abstract = {Saccades are rapid ballistic eye movements that humans make to direct the fovea to an object of interest. Their kinematics is well defined, showing regular relationships between amplitude, duration, and velocity: the saccadic 'main sequence'. Deviations of eye movements from the main sequence can be used as markers of specific neurological disorders. Despite its significance, there is no general methodological consensus for reliable and repeatable measurements of the main sequence. In this work, we propose a novel approach for standard indicators of oculomotor performance. The obtained measurements are characterized by high repeatability, allowing for fine assessments of inter- and intra-subject variability, and inter-ocular differences. The designed experimental procedure is natural and non-fatiguing, thus it is well suited for fragile or non-collaborative subjects like neurological patients and infants. The method has been released as a software toolbox for public use. This framework lays the foundation for a normative dataset of healthy oculomotor performance for the assessment of oculomotor dysfunctions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13428-020-01388-2
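
The peak-velocity-versus-amplitude branch of the main sequence is commonly modelled as a saturating exponential, V_peak = V_max(1 − e^(−A/C)). Below is a minimal fitting sketch on synthetic saccades; the model form follows the oculomotor literature, while the parameter values and noise level are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Saturating main-sequence model: peak velocity as a function of amplitude."""
    return v_max * (1.0 - np.exp(-amplitude / c))

# Synthetic saccades: amplitudes in degrees, peak velocities in deg/s.
rng = np.random.default_rng(1)
amps = rng.uniform(1, 30, size=200)
vels = main_sequence(amps, 500.0, 6.0) + rng.normal(0, 15, size=200)

(v_max_hat, c_hat), _ = curve_fit(main_sequence, amps, vels, p0=[400.0, 5.0])
print(f"v_max = {v_max_hat:.0f} deg/s, c = {c_hat:.1f} deg")
```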

Hao Gong; Scott S. Hsieh; David R. Holmes; David A. Cook; Akitoshi Inoue; David J. Bartlett; Francis Baffour; Hiroaki Takahashi; Shuai Leng; Lifeng Yu; Cynthia H. McCollough; Joel G. Fletcher

An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation Journal Article

In: Medical Physics, vol. 48, no. 11, pp. 6710–6723, 2021.

Abstract | Links | BibTeX

@article{Gong2021,
title = {An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation},
author = {Hao Gong and Scott S. Hsieh and David R. Holmes and David A. Cook and Akitoshi Inoue and David J. Bartlett and Francis Baffour and Hiroaki Takahashi and Shuai Leng and Lifeng Yu and Cynthia H. McCollough and Joel G. Fletcher},
doi = {10.1002/mp.15219},
year = {2021},
date = {2021-01-01},
journal = {Medical Physics},
volume = {48},
number = {11},
pages = {6710--6723},
abstract = {Purpose: Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data using this platform for different eye-tracking data acquisition modes. Methods: An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chin rest), free movement with general biofeedback, and free movement with strict biofeedback. Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target prior to the integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation using units of image pixels and the degree of visual angle. Results: The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± the standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3 viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while the use of general biofeedback demonstrated a slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. These corresponded to visual angles ranging from 0.7° to 1.3°. Conclusions: An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to head-stabilized mode.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

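The accuracy metric in this entry is the offset between the intended target and the detected fixation, reported both in image pixels and in degrees of visual angle. A minimal sketch of the pixel-to-angle conversion, with pixel pitch and viewing distance as assumed values (the entry does not give the display geometry):

import math

def offset_to_visual_angle(offset_px, pixel_pitch_mm, viewing_distance_mm):
    # On-screen size of the offset, then the angle it subtends at the eye.
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(2 * math.atan2(offset_mm / 2, viewing_distance_mm))

# Illustrative values only: a 16.7-pixel offset on a 0.25 mm/pixel display
# viewed at 650 mm subtends roughly 0.37 degrees.
print(offset_to_visual_angle(16.7, 0.25, 650.0))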

Carlos Sillero-Rejon; Ute Leonards; Marcus R. Munafò; Craig Hedge; Janet Hoek; Benjamin Toll; Harry Gove; Isabel Willis; Rose Barry; Abi Robinson; Olivia M. Maynard

Avoidance of tobacco health warnings? An eye-tracking approach Journal Article

In: Addiction, vol. 116, no. 1, pp. 126–138, 2021.

Abstract | Links | BibTeX

@article{SilleroRejon2021,
title = {Avoidance of tobacco health warnings? An eye-tracking approach},
author = {Carlos Sillero-Rejon and Ute Leonards and Marcus R. Munafò and Craig Hedge and Janet Hoek and Benjamin Toll and Harry Gove and Isabel Willis and Rose Barry and Abi Robinson and Olivia M. Maynard},
doi = {10.1111/add.15148},
year = {2021},
date = {2021-01-01},
journal = {Addiction},
volume = {116},
number = {1},
pages = {126--138},
abstract = {Aims: Among three eye-tracking studies, we examined how cigarette pack features affected visual attention and self-reported avoidance of and reactance to warnings. Design: Study 1: smoking status × warning immediacy (short-term versus long-term health consequences) × warning location (top versus bottom of pack). Study 2: smoking status × warning framing (gain-framed versus loss-framed) × warning format (text-only versus pictorial). Study 3: smoking status × warning severity (highly severe versus moderately severe consequences of smoking). Setting: University of Bristol, UK, eye-tracking laboratory. Participants: Study 1: non-smokers (n = 25), weekly smokers (n = 25) and daily smokers (n = 25). Study 2: non-smokers (n = 37), smokers contemplating quitting (n = 37) and smokers not contemplating quitting (n = 43). Study 3: non-smokers (n = 27), weekly smokers (n = 26) and daily smokers (n = 26). Measurements: For all studies: visual attention, measured as the ratio of the number of fixations to the warning versus the branding, self-reported predicted avoidance of and reactance to warnings and for Study 3, effect of warning on quitting motivation. Findings: Study 1: greater self-reported avoidance [mean difference (MD) = 1.14; 95% confidence interval (CI) = 0.94, 1.35, P < 0.001, $\eta_p^2$ = 0.64] and visual attention (MD = 0.89, 95% CI = 0.09, 1.68},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

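The attention measure defined above, the ratio of fixations on the warning versus the branding, reduces to a count over AOI-labelled fixations. A sketch under assumed AOI labels (not the study's actual coding scheme):

def warning_to_branding_ratio(fixation_aois):
    # One trial's fixations as AOI labels; "warning" and "branding" are
    # illustrative label names.
    n_warning = sum(aoi == "warning" for aoi in fixation_aois)
    n_branding = sum(aoi == "branding" for aoi in fixation_aois)
    return n_warning / n_branding if n_branding else float("inf")

print(warning_to_branding_ratio(["warning", "branding", "warning", "other"]))  # 2.0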

Jia Qiong Xie; Detlef H. Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L. Monk

The association between excessive social media use and distraction: An eye movement tracking study Journal Article

In: Information and Management, vol. 58, no. 2, pp. 1–12, 2021.

Abstract | Links | BibTeX

@article{Xie2021a,
title = {The association between excessive social media use and distraction: An eye movement tracking study},
author = {Jia Qiong Xie and Detlef H. Rost and Fu Xing Wang and Jin Liang Wang and Rebecca L. Monk},
doi = {10.1016/j.im.2020.103415},
year = {2021},
date = {2021-01-01},
journal = {Information and Management},
volume = {58},
number = {2},
pages = {1--12},
publisher = {Elsevier B.V.},
abstract = {Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article attempted to explore the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interference information than non-microblog users, resulting in poorer performance. Theoretical and practical implications are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

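Study 2 scores interference suppression with a modified Stroop task. The classic interference index, shown below with illustrative reaction times, is the mean incongruent minus mean congruent response time; the study's exact scoring is not given in the abstract:

from statistics import mean

def stroop_interference(congruent_rts_ms, incongruent_rts_ms):
    # Larger cost = poorer suppression of the interfering dimension.
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)

print(stroop_interference([520, 540, 510], [610, 650, 630]))  # ~106.7 ms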

David Souto; Olivia Marsh; Claire Hutchinson; Simon Judge; Kevin B. Paterson

Cognitive plasticity induced by gaze-control technology: Gaze-typing improves performance in the antisaccade task Journal Article

In: Computers in Human Behavior, vol. 122, pp. 106831, 2021.

Abstract | Links | BibTeX

@article{Souto2021,
title = {Cognitive plasticity induced by gaze-control technology: Gaze-typing improves performance in the antisaccade task},
author = {David Souto and Olivia Marsh and Claire Hutchinson and Simon Judge and Kevin B. Paterson},
doi = {10.1016/j.chb.2021.106831},
year = {2021},
date = {2021-01-01},
journal = {Computers in Human Behavior},
volume = {122},
pages = {106831},
publisher = {Elsevier Ltd},
abstract = {The last twenty years have seen the development of gaze-controlled computer interfaces for augmentative communication and other assistive technology applications. In many applications, the user needs to look at symbols on a virtual on-screen keyboard and maintain gaze to make a selection. Executive control is essential to learning to use gaze-control, affecting the uptake of the technology. Specifically, the user of a gaze-controlled interface must suppress looking for its own sake, the so-called “Midas touch” problem. In a pre-registered study (https://osf.io/2mak4), we tested whether gaze-typing performance depends on executive control and whether learning-dependent plasticity leads to improved executive control as measured using the antisaccade task. Forty-two university students were recruited as participants. After five 30-min training sessions, we found shorter antisaccade latencies in a gaze-control compared to a mouse-control group, and similar error-rates. Subjective workload ratings were also similar across groups, indicating the task in both groups was matched for difficulty. These findings suggest that executive control contributes to gaze-typing performance and leads to learning-induced plasticity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

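Gaze-typing interfaces of the kind described here avoid the "Midas touch" problem by requiring a dwell before a selection. A minimal sketch of dwell-based selection over a fixed-rate gaze stream, with the dwell threshold and sample rate as assumptions:

def dwell_select(gaze_samples, dwell_ms=800, sample_interval_ms=4):
    # gaze_samples: key label under gaze (or None) at each sample of a
    # fixed-rate stream; 4 ms per sample corresponds to 250 Hz.
    selections, current, held_ms = [], None, 0
    for key in gaze_samples:
        if key is not None and key == current:
            held_ms += sample_interval_ms
            if held_ms >= dwell_ms:
                selections.append(key)  # select, then require a fresh dwell
                held_ms = 0
        else:
            current, held_ms = key, 0
    return selections

print(dwell_select(["A"] * 250))  # ['A']: selected after 0.8 s of steady gaze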

Nicole H. Yuen; Fred Tam; Nathan W. Churchill; Tom A. Schweizer; Simon J. Graham

Driving with distraction: Measuring brain activity and oculomotor behavior using fMRI and eye-tracking Journal Article

In: Frontiers in Human Neuroscience, vol. 15, pp. 1–20, 2021.

Abstract | Links | BibTeX

@article{Yuen2021,
title = {Driving with distraction: Measuring brain activity and oculomotor behavior using fMRI and eye-tracking},
author = {Nicole H. Yuen and Fred Tam and Nathan W. Churchill and Tom A. Schweizer and Simon J. Graham},
doi = {10.3389/fnhum.2021.659040},
year = {2021},
date = {2021-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {15},
pages = {1--20},
abstract = {Introduction: Driving motor vehicles is a complex task that depends heavily on how visual stimuli are received and subsequently processed by the brain. The potential impact of distraction on driving performance is well known and poses a safety concern – especially for individuals with cognitive impairments who may be clinically unfit to drive. The present study is the first to combine functional magnetic resonance imaging (fMRI) and eye-tracking during simulated driving with distraction, providing oculomotor metrics to enhance scientific understanding of the brain activity that supports driving performance. Materials and Methods: As initial work, twelve healthy young, right-handed participants performed turns ranging in complexity, including simple right and left turns without oncoming traffic, and left turns with oncoming traffic. Distraction was introduced as an auditory task during straight driving, and during left turns with oncoming traffic. Eye-tracking data were recorded during fMRI to characterize fixations, saccades, pupil diameter and blink rate. Results: Brain activation maps for right turns, left turns without oncoming traffic, left turns with oncoming traffic, and the distraction conditions were largely consistent with previous literature reporting the neural correlates of simulated driving. When the effects of distraction were evaluated for left turns with oncoming traffic, increased activation was observed in areas involved in executive function (e.g., middle and inferior frontal gyri) as well as decreased activation in the posterior brain (e.g., middle and superior occipital gyri). Whereas driving performance remained mostly unchanged (e.g., turn speed, time to turn, collisions), the oculomotor measures showed that distraction resulted in more consistent gaze at oncoming traffic in a small area of the visual scene; less time spent gazing at off-road targets (e.g., speedometer, rear-view mirror); more time spent performing saccadic eye movements; and decreased blink rate. Conclusion: Oculomotor behavior modulated with driving task complexity and distraction in a manner consistent with the brain activation features revealed by fMRI. The results suggest that eye-tracking technology should be included in future fMRI studies of simulated driving behavior in targeted populations, such as the elderly and individuals with cognitive complaints – ultimately toward developing better technology to assess and enhance fitness to drive.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

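Among the oculomotor measures recorded here is blink rate, which is commonly derived from gaps in the pupil trace. A sketch using assumed thresholds, not the study's actual pipeline:

import numpy as np

def blink_rate_per_min(pupil, fs=500.0, min_gap_samples=25):
    # Count runs of missing pupil samples (NaN) of at least ~50 ms at
    # 500 Hz as blinks; both thresholds are illustrative.
    missing = np.isnan(pupil).astype(int)
    edges = np.diff(np.concatenate(([0], missing, [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    n_blinks = int(np.sum((ends - starts) >= min_gap_samples))
    return 60.0 * n_blinks / (len(pupil) / fs)

trace = np.ones(5000)             # 10 s of data at 500 Hz
trace[1000:1100] = np.nan         # one 200 ms gap
print(blink_rate_per_min(trace))  # 6.0 blinks per minute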

Jennifer Sudkamp; Mateusz Bocian; David Souto

The role of eye movements in perceiving vehicle speed and time-to-arrival at the roadside Journal Article

In: Scientific Reports, vol. 11, pp. 23312, 2021.

Abstract | Links | BibTeX

@article{Sudkamp2021,
title = {The role of eye movements in perceiving vehicle speed and time-to-arrival at the roadside},
author = {Jennifer Sudkamp and Mateusz Bocian and David Souto},
doi = {10.1038/s41598-021-02412-x},
year = {2021},
date = {2021-01-01},
journal = {Scientific Reports},
volume = {11},
pages = {23312},
publisher = {Nature Publishing Group UK},
abstract = {To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians' gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario and to what extent spontaneous gaze behavior affects those judgements. Participants performed in succession a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario—a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speeds of around 18 km/h and times-to-arrival of 0.7 s. In both tasks, observers placed their gaze closely towards the center of the vehicle's front plane while pursuing the vehicle. Other areas of the visual scene were sampled infrequently. No differences were found in the average gaze behavior between the two tasks and a pattern classifier (Support Vector Machine), trained on trial-level gaze patterns, failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road-crossing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

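The abstract reports that a Support Vector Machine trained on trial-level gaze patterns failed to classify the task above chance. A generic sketch of that analysis with synthetic, uninformative features, which likewise lands at chance:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # trial-level gaze features (illustrative)
y = rng.integers(0, 2, size=200)  # task label: 0 = speed, 1 = time-to-arrival

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5, i.e., chance level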

Chaitanya Thammineni; Hemanth Manjunatha; Ehsan T. Esfahani

Selective eye-gaze augmentation to enhance imitation learning in Atari games Journal Article

In: Neural Computing and Applications, 2021.

Abstract | Links | BibTeX

@article{Thammineni2021,
title = {Selective eye-gaze augmentation to enhance imitation learning in Atari games},
author = {Chaitanya Thammineni and Hemanth Manjunatha and Ehsan T. Esfahani},
doi = {10.1007/s00521-021-06367-y},
year = {2021},
date = {2021-01-01},
journal = {Neural Computing and Applications},
publisher = {Springer London},
abstract = {This paper presents the selective use of eye-gaze information in learning human actions in Atari games. Extensive evidence suggests that our eye movements convey a wealth of information about the direction of our attention and mental states and encode the information necessary to complete a task. Based on this evidence, we hypothesize that selective use of eye-gaze, as a cue to attention direction, will enhance the learning from demonstration. For this purpose, we propose a selective eye-gaze augmentation (SEA) network that learns when to use the eye-gaze information. The proposed network architecture consists of three sub-networks: gaze prediction, gating, and action prediction network. Using the prior 4 game frames, a gaze map is predicted by the gaze prediction network, which is used for augmenting the input frame. The gating network determines whether the predicted gaze map should be used in learning and is fed to the final network to predict the action at the current frame. To validate this approach, we use the publicly available Atari Human Eye-Tracking And Demonstration (Atari-HEAD) dataset, which consists of 20 Atari games with 28 million human demonstrations and 328 million eye-gazes (over game frames) collected from four subjects. We demonstrate the efficacy of selective eye-gaze augmentation compared to the state-of-the-art Attention Guided Imitation Learning (AGIL) and Behavior Cloning (BC). The results indicate that the selective augmentation approach (the SEA network) performs significantly better than AGIL and BC. Moreover, to demonstrate the significance of selective use of gaze through the gating network, we compare our approach with random selection of the gaze. Even in this case, the SEA network performs significantly better, validating the advantage of selectively using the gaze in demonstration learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

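The SEA architecture described above comprises gaze-prediction, gating, and action-prediction sub-networks, with a gaze map predicted from the prior four frames used to augment the current frame. A schematic PyTorch sketch; the layer sizes, the multiplicative augmentation, and the hard gating rule are assumptions, not the published design:

import torch
import torch.nn as nn

class SEASketch(nn.Module):
    def __init__(self, n_actions=18):
        super().__init__()
        self.gaze_net = nn.Sequential(            # 4 stacked frames -> gaze map
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        self.gate_net = nn.Sequential(            # decide: use the gaze map or not
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(4 * 64, 1), nn.Sigmoid())
        self.action_net = nn.Sequential(          # current frame -> action logits
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(n_actions))

    def forward(self, prev4, frame):
        gaze = self.gaze_net(prev4)               # (B, 1, H, W) attention map
        use_gaze = (self.gate_net(prev4) > 0.5).float().view(-1, 1, 1, 1)
        augmented = frame * (1 + use_gaze * gaze) # highlight attended regions
        return self.action_net(augmented)

model = SEASketch()
logits = model(torch.rand(2, 4, 84, 84), torch.rand(2, 1, 84, 84))
print(logits.shape)  # torch.Size([2, 18])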

María Silva-Gago; Flora Ioannidou; Annapaola Fedato; Timothy Hodgson; Emiliano Bruner

Visual attention and cognitive archaeology: An eye-tracking study of palaeolithic stone tools Journal Article

In: Perception, pp. 1–22, 2021.

Abstract | Links | BibTeX

@article{SilvaGago2021,
title = {Visual attention and cognitive archaeology: An eye-tracking study of palaeolithic stone tools},
author = {María Silva-Gago and Flora Ioannidou and Annapaola Fedato and Timothy Hodgson and Emiliano Bruner},
doi = {10.1177/03010066211069504},
year = {2021},
date = {2021-01-01},
journal = {Perception},
pages = {1--22},
abstract = {The study of lithic technology can provide information on human cultural evolution. This article aims to analyse visual behaviour associated with the exploration of ancient stone artefacts and how this relates to perceptual mechanisms in humans. In Experiment 1, we used eye tracking to record patterns of eye fixations while participants viewed images of stone tools, including examples of worked pebbles and handaxes. The results showed that the focus of gaze was directed more towards the upper regions of worked pebbles and on the basal areas for handaxes. Knapped surfaces also attracted more fixation than natural cortex for both tool types. Fixation distribution was different to that predicted by models that calculate visual salience. Experiment 2 was an online study using a mouse-click attention tracking technique and included images of unworked pebbles and ‘mixed' images combining the handaxe's outline with the pebble's unworked texture. The pattern of clicks corresponded to that revealed using eye tracking and there were differences between tools and other images. Overall, the findings suggest that visual exploration is directed towards functional aspects of tools. Studies of visual attention and exploration can supply useful information to inform understanding of human cognitive evolution and tool use.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

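Experiment 1 compares the fixation distribution with model-predicted visual salience. A common way to run such a comparison is to correlate a smoothed fixation-density map with the salience map; a sketch with an assumed smoothing width:

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_salience_correlation(fix_xy, salience, sigma=20):
    # Smoothed fixation-density map vs. a model salience map of the same
    # shape; sigma (in pixels) is an illustrative smoothing width.
    density = np.zeros_like(salience, dtype=float)
    for x, y in fix_xy:
        density[int(y), int(x)] += 1
    density = gaussian_filter(density, sigma)
    return np.corrcoef(density.ravel(), salience.ravel())[0, 1]

salience = np.random.rand(600, 800)
print(fixation_salience_correlation([(400, 300), (420, 310), (100, 500)], salience))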

Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew

The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology Journal Article

In: Psychonomic Bulletin & Review, vol. 28, no. 2, pp. 503–511, 2021.

Abstract | Links | BibTeX

@article{Williams2021a,
title = {The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology},
author = {Lauren Williams and Ann Carrigan and William Auffermann and Megan Mills and Anina Rich and Joann Elmore and Trafton Drew},
doi = {10.3758/s13423-020-01826-4},
year = {2021},
date = {2021-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {28},
number = {2},
pages = {503--511},
publisher = {Psychonomic Bulletin & Review},
abstract = {Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Luming Zhang; Xiaoqin Zhang; Mingliang Xu; Ling Shao

Massive-scale aerial photo categorization by cross-resolution visual perception enhancement Journal Article

In: IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2021.

Abstract | Links | BibTeX

@article{Zhang2021e,
title = {Massive-scale aerial photo categorization by cross-resolution visual perception enhancement},
author = {Luming Zhang and Xiaoqin Zhang and Mingliang Xu and Ling Shao},
doi = {10.1109/TNNLS.2021.3055548},
year = {2021},
date = {2021-01-01},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
pages = {1--14},
abstract = {Categorizing aerial photographs with varied weather/lighting conditions and sophisticated geomorphic factors is a key module in autonomous navigation, environmental evaluation, and so on. Previous image recognizers cannot fulfill this task due to three challenges: 1) localizing visually/semantically salient regions within each aerial photograph in a weakly annotated context due to the unaffordable human resources required for pixel-level annotation; 2) aerial photographs generally carry multiple informative attributes (e.g., clarity and reflectivity) that must be encoded for better aerial photograph modeling; and 3) designing a cross-domain knowledge transferal module to enhance aerial photograph perception since multiresolution aerial photographs are taken asynchronously and are mutually complementary. To handle the above problems, we propose to optimize aerial photograph's feature learning by leveraging the low-resolution spatial composition to enhance the deep learning of perceptual features with a high resolution. More specifically, we first extract many BING-based object patches (Cheng et al., 2014) from each aerial photograph. A weakly supervised ranking algorithm selects a few semantically salient ones by seamlessly incorporating multiple aerial photograph attributes. Toward an interpretable aerial photograph recognizer indicative of human visual perception, we construct a gaze shifting path (GSP) by linking the top-ranking object patches and, subsequently, derive the deep GSP feature. Finally, a cross-domain multilabel SVM is formulated to categorize each aerial photograph. It leverages the global feature from low-resolution counterparts to optimize the deep GSP feature from a high-resolution aerial photograph. Comparative results on our compiled million-scale aerial photograph set have demonstrated the competitiveness of our approach. Moreover, the eye-tracking experiment has shown that our ranking-based GSPs are over 92% consistent with the real human gaze shifting sequences.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

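The gaze shifting path (GSP) is built by linking top-ranking object patches. A minimal sketch that keeps the k highest-ranked patch centers and links them in rank order; the abstract only says GSPs link top-ranking patches, so rank order is an assumed linking rule:

import numpy as np

def gaze_shifting_path(patch_centers, scores, k=5):
    # Keep the k top-ranked patches and order them by descending score.
    top = np.argsort(scores)[::-1][:k]
    return np.asarray(patch_centers, dtype=float)[top]

centers = [(10, 10), (200, 40), (50, 60), (300, 300), (120, 90)]
print(gaze_shifting_path(centers, [0.9, 0.5, 0.8, 0.2, 0.7], k=3))
# -> centers of the three highest-ranked patches, in path order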

Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu

Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article

In: British Journal of Educational Technology, vol. 52, no. 2, pp. 606–618, 2021.

Abstract | Links | BibTeX

@article{Zhang2021k,
title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction},
author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu},
doi = {10.1111/bjet.13045},
year = {2021},
date = {2021-01-01},
journal = {British Journal of Educational Technology},
volume = {52},
number = {2},
pages = {606--618},
abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes. What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

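The mediating eye measure here is percentage dwell time on one's own ideas. A sketch of that computation over AOI-labelled fixation durations; the "own"/"peer" labels are illustrative stand-ins for the screen regions in the study:

def percent_dwell(fixations, aoi):
    # fixations: (aoi_label, duration_ms) pairs for one participant.
    total = sum(d for _, d in fixations)
    in_aoi = sum(d for a, d in fixations if a == aoi)
    return 100.0 * in_aoi / total if total else 0.0

fix = [("own", 300), ("peer", 150), ("own", 250), ("other", 100)]
print(percent_dwell(fix, "own"))  # 68.75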

Chiao I. Tseng; Jochen Laubrock; John A. Bateman

The impact of multimodal cohesion on attention and interpretation in film Journal Article

In: Discourse, Context and Media, vol. 44, pp. 100544, 2021.

Abstract | Links | BibTeX

@article{Tseng2021,
title = {The impact of multimodal cohesion on attention and interpretation in film},
author = {Chiao I. Tseng and Jochen Laubrock and John A. Bateman},
doi = {10.1016/j.dcm.2021.100544},
year = {2021},
date = {2021-01-01},
journal = {Discourse, Context and Media},
volume = {44},
pages = {100544},
publisher = {Elsevier Ltd},
abstract = {This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Xiangling Wang; Tingting Wang; Ricardo Muñoz Martín; Yanfang Jia

Investigating usability in postediting neural machine translation: Evidence from translation trainees' self-perception and performance Journal Article

In: Across Languages and Cultures, vol. 22, no. 1, pp. 100–123, 2021.

Abstract | Links | BibTeX

@article{Wang2021j,
title = {Investigating usability in postediting neural machine translation: Evidence from translation trainees' self-perception and performance},
author = {Xiangling Wang and Tingting Wang and Ricardo Muñoz Martín and Yanfang Jia},
doi = {10.1556/084.2021.00006},
year = {2021},
date = {2021-01-01},
journal = {Across Languages and Cultures},
volume = {22},
number = {1},
pages = {100--123},
abstract = {This is a report on an empirical study of the usability of neural machine translation systems for translation trainees when post-editing (MTPE). Sixty Chinese translation trainees completed a questionnaire on their perceptions of MTPE's usability. Fifty of them later performed both a post-editing task and a regular translation task, designed to examine MTPE's usability by comparing their performance in terms of text processing speed, effort, and translation quality. Contrasting data collected by the questionnaire, keylogging, eye tracking and retrospective reports, we found that, compared with regular, unaided translation, MTPE's usefulness in performance was remarkable: (1) it increased translation trainees' text processing speed and also improved their translation quality; (2) MTPE's ease of use in performance was partly supported in that it significantly reduced informants' effort as measured by (a) fixation duration and fixation counts; (b) total task time; and (c) the number of insertion keystrokes and total keystrokes. However, (3) translation trainees generally perceived MTPE to be useful to increase productivity, but they were skeptical about its use to improve quality. They were neutral towards the ease of use of MTPE.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

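Effort in this study is operationalized through fixation measures, total task time, and keystroke counts. A sketch of the per-condition aggregation, with column names and values purely illustrative:

import pandas as pd

# One row per informant and condition; all names and numbers are invented.
log = pd.DataFrame({
    "condition":   ["mtpe", "mtpe", "manual", "manual"],
    "task_time_s": [620, 700, 910, 980],
    "fix_count":   [1450, 1600, 2100, 2300],
    "keystrokes":  [850, 900, 2600, 2800],
})

# Mean effort per condition; lower MTPE values mirror the reduced effort
# the study reports for post-editing.
print(log.groupby("condition")[["task_time_s", "fix_count", "keystrokes"]].mean())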

Junming Zheng; Muhammad Waqqas Khan Tarin; Denghui Jiang; Min Li; Jing Ye; Lingyan Chen; Tianyou He; Yushan Zheng

Which ornamental features of bamboo plants will attract the people most? Journal Article

In: Urban Forestry and Urban Greening, vol. 61, pp. 127101, 2021.

Abstract | Links | BibTeX

@article{Zheng2021b,
title = {Which ornamental features of bamboo plants will attract the people most?},
author = {Junming Zheng and Muhammad Waqqas Khan Tarin and Denghui Jiang and Min Li and Jing Ye and Lingyan Chen and Tianyou He and Yushan Zheng},
doi = {10.1016/j.ufug.2021.127101},
year = {2021},
date = {2021-01-01},
journal = {Urban Forestry and Urban Greening},
volume = {61},
pages = {127101},
publisher = {Elsevier GmbH},
abstract = {Plant structure and architecture have a significant influence on how people interpret them. Bamboo plants have highly ornamental attributes, but the traits that attract people the most are still unknown. Therefore, to assess people's preference for ornamental features of bamboo plants, eye-tracking measures (fixation count, percentage of dwell time, pupil size, and saccade amplitude) and a questionnaire survey about subjective preference were collected from ninety college students. The results showed that subjective ratings of stem color, leaf stripes, and stem stripes had a significant positive correlation with the fixation count. The pupil size and saccade amplitude of different ornamental features were not correlated with the subjective ratings. According to a random forest model, fixation count was the most influential aspect affecting subjective ratings. Based on integrated eye-tracking measures and subjective ratings, we conclude that people prefer ornamental features such as a green stem, a green stem with irregular yellow stripes or a yellow stem with narrow green stripes, leaves with fewer stripes, a normal stem, and a tree-like form. In addition, people prefer natural traits (for instance, a green stem, a normal stem, and a tree-like form), which may relate to latent conscious beliefs and evolutionary adaptation. Abnormal traits, such as leaf stripes and stem stripes, attract people's visual attention and interest, increasing the fixation count and the percentage of dwell time. This study has significant implications for landscape experts in the design and maintenance of ornamental bamboo plantations in China as well as in other areas of the world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ufug.2021.127101

2020

Hanna Brinkmann; Louis Williams; Raphael Rosenberg; Eugene McSorley

Does 'action viewing' really exist? Perceived dynamism and viewing behaviour Journal Article

In: Art and Perception, vol. 8, no. 1, pp. 27–48, 2020.

Abstract | Links | BibTeX

@article{Brinkmann2020,
title = {Does 'action viewing' really exist? Perceived dynamism and viewing behaviour},
author = {Hanna Brinkmann and Louis Williams and Raphael Rosenberg and Eugene McSorley},
doi = {10.1163/22134913-20191128},
year = {2020},
date = {2020-12-01},
journal = {Art and Perception},
volume = {8},
number = {1},
pages = {27--48},
publisher = {Brill},
abstract = {Throughout the 20th century, there have been many different forms of abstract painting. While works by some artists, e.g., Piet Mondrian, are usually described as static, others are described as dynamic, such as Jackson Pollock's 'action paintings'. Art historians have assumed that beholders not only conceptualise such differences in depicted dynamics but also mirror these in their viewing behaviour. In an interdisciplinary eye-tracking study, we tested this concept by investigating both the localisation of fixations (polyfocal viewing) and the average duration of fixations, as well as saccade velocity, duration and path curvature. We showed 30 different abstract paintings to 40 participants - 20 laypeople and 20 experts (art students) - and used self-reporting to investigate the perceived dynamism of each painting and its relationship with (a) the average number and duration of fixations, (b) the average number, duration and velocity of saccades as well as the amplitude and curvature area of saccade paths, and (c) pleasantness and familiarity ratings. We found that the average number of fixations and saccades, saccade velocity, and pleasantness ratings increased with perceived dynamism ratings, while saccade duration decreased. Additionally, the analysis showed that experts gave higher dynamism ratings than laypeople and were more familiar with the artworks. These results indicate that there is a correlation between perceived dynamism in abstract painting and viewing behaviour - something that has long been assumed by art historians but had never been empirically supported.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1163/22134913-20191128
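
Of the oculomotor measures listed in this abstract, the curvature area of saccade paths is the least standardised. One common definition is the signed area between the saccade's path and the straight line joining its endpoints. The Python sketch below illustrates that definition alongside amplitude, duration and peak velocity; it is a generic illustration under assumed conventions (2-D gaze positions in degrees, a fixed sampling rate), not the computation used in the paper.

import numpy as np

def saccade_metrics(xy, fs=1000.0):
    """Basic metrics for one saccade. xy: (N, 2) gaze positions in degrees,
    sampled at fs Hz. Returns amplitude (deg), duration (s), peak velocity
    (deg/s) and signed curvature area (deg^2)."""
    p0, p1 = xy[0], xy[-1]
    d = p1 - p0
    amplitude = np.hypot(d[0], d[1])
    duration = (len(xy) - 1) / fs
    step = np.hypot(*np.diff(xy, axis=0).T)      # sample-to-sample distance
    peak_velocity = step.max() * fs
    # Signed perpendicular deviation of each sample from the endpoint line,
    # integrated along the line: the "curvature area" of the path.
    dev = (d[0] * (xy[:, 1] - p0[1]) - d[1] * (xy[:, 0] - p0[0])) / amplitude
    s = (xy - p0) @ d / amplitude                # progress along the line
    curvature_area = np.trapz(dev, s)
    return amplitude, duration, peak_velocity, curvature_area

# Toy example: an 8-degree saccade that bows 1.5 degrees off the direct path.
t = np.linspace(0.0, 1.0, 50)
xy = np.c_[8.0 * t, 1.5 * np.sin(np.pi * t)]
print(saccade_metrics(xy))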

Elisa Infanti; D. Samuel Schwarzkopf

Mapping sequences can bias population receptive field estimates Journal Article

In: NeuroImage, vol. 211, pp. 116636, 2020.

Abstract | Links | BibTeX

@article{Infanti2020,
title = {Mapping sequences can bias population receptive field estimates},
author = {Elisa Infanti and D. Samuel Schwarzkopf},
doi = {10.1016/j.neuroimage.2020.116636},
year = {2020},
date = {2020-05-01},
journal = {NeuroImage},
volume = {211},
pages = {116636},
publisher = {Elsevier Ltd},
abstract = {Population receptive field (pRF) modelling is a common technique for estimating the stimulus-selectivity of populations of neurons using neuroimaging. Here, we aimed to address if pRF properties estimated with this method depend on the spatio-temporal structure and the predictability of the mapping stimulus. We mapped the polar angle preference and tuning width of voxels in visual cortex (V1–V4) of healthy, adult volunteers. We compared sequences sweeping orderly through the visual field or jumping from location to location employing stimuli of different width (45° vs 6°) and cycles of variable duration (8s vs 60s). While we did not observe any systematic influence of stimulus predictability, the temporal structure of the sequences significantly affected tuning width estimates. Ordered designs with large wedges and short cycles produced systematically smaller estimates than random sequences. Interestingly, when we used small wedges and long cycles, we obtained larger tuning width estimates for ordered than random sequences. We suggest that ordered and random mapping protocols show different susceptibility to other design choices such as stimulus type and duration of the mapping cycle and can produce significantly different pRF results.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.neuroimage.2020.116636
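
For readers unfamiliar with pRF modelling, the standard forward model is compact enough to sketch. The Python sketch below is a textbook-style illustration, not the authors' analysis code: the isotropic Gaussian pRF, the coarse grid search, and the omission of HRF convolution and amplitude scaling are all simplifying assumptions.

import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, sigma):
    """Predicted response of a Gaussian pRF centred at (x0, y0) with size
    sigma: at each time point, the overlap between the 2-D Gaussian and the
    binary stimulus aperture. stim: (T, H, W); xs, ys: (H, W) pixel
    coordinates in degrees of visual angle."""
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return stim.reshape(stim.shape[0], -1) @ rf.ravel()

def fit_prf(bold, stim, xs, ys, grid):
    """Exhaustive grid search: keep the (x0, y0, sigma) whose prediction
    correlates best with the measured time course."""
    best, best_r = None, -np.inf
    for x0, y0, sigma in grid:
        r = np.corrcoef(prf_prediction(stim, xs, ys, x0, y0, sigma), bold)[0, 1]
        if r > best_r:
            best, best_r = (x0, y0, sigma), r
    return best, best_r

# Toy usage: a 20 x 20 deg field, random apertures, a known ground-truth pRF.
ys, xs = np.mgrid[-10:10:21j, -10:10:21j]
stim = (np.random.default_rng(1).random((100, 21, 21)) > 0.7).astype(float)
bold = prf_prediction(stim, xs, ys, 2.0, -1.0, 1.5)
grid = [(x, y, s) for x in range(-4, 5, 2) for y in range(-4, 5, 2)
        for s in (1.0, 1.5, 2.0)]
print(fit_prf(bold, stim, xs, ys, grid))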

Jaana Simola; Jarmo Kuisma; Johanna K. Kaakinen

Attention, memory and preference for direct and indirect print advertisements Journal Article

In: Journal of Business Research, vol. 111, pp. 249–261, 2020.

Abstract | Links | BibTeX

@article{Simola2020,
title = {Attention, memory and preference for direct and indirect print advertisements},
author = {Jaana Simola and Jarmo Kuisma and Johanna K. Kaakinen},
doi = {10.1016/j.jbusres.2019.06.028},
year = {2020},
date = {2020-04-01},
journal = {Journal of Business Research},
volume = {111},
pages = {249--261},
publisher = {Elsevier Inc.},
abstract = {We examined the effectiveness of direct and indirect advertising. Direct ads openly depict advertised products and brands. In indirect ads, the ad message requires elaboration. Eye movements were recorded while consumers viewed direct and indirect advertisements under fixed (5 s) or unlimited exposure time. Recognition of ads, brand logos and preference for brands were tested under two different delays (after 24 h or 45 min) from the ad exposure. The total viewing time was longer for the indirect ads when exposure time was unlimited. Overall, ad pictorials received more fixations and the brand preference was higher in the indirect condition. Recognition improved for brand logos of indirect ads when tested after the shorter delay. Consumers experienced indirect ads as more original, surprising, intellectually challenging and harder to interpret than direct ads. Current results indicate that indirect ads elicit cognitive elaboration that translates into higher preference and memorability for brands.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.jbusres.2019.06.028

Wang Tong

An eye-movement experimental study of college students aesthetic preference for the shape of desk lamp Journal Article

In: International Journal of Trend in Research and Development, vol. 7, no. 3, pp. 146–148, 2020.

Abstract | BibTeX

@article{Tong2020,
title = {An eye-movement experimental study of college students aesthetic preference for the shape of desk lamp},
author = {Wang Tong},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Trend in Research and Development},
volume = {7},
number = {3},
pages = {146--148},
abstract = {Taking the table lamp as the research object, this study combines eye movement analysis with a subjective questionnaire survey to explore college students' aesthetic preferences for table lamp shapes, drawing on a comprehensive analysis of the participants' eye movement data and questionnaire responses, so as to provide a design reference for enterprises and fellow designers. An SR Research EyeLink head-mounted eye tracker was used to record the eye movement characteristics of 20 participants while they viewed pictures of different table lamp shapes. The results show that the modern minimalist style is the most popular, followed by the European style and the Chinese style.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko

Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention Journal Article

In: Journal of Educational Multimedia and Hypermedia, vol. 29, no. 3, pp. 265–284, 2020.

Abstract | BibTeX

@article{Zhu2020,
title = {Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention},
author = {Jiawen Zhu and Kara Dawson and Albert D Ritzhaupt and Pavlo Pasha Antonenko},
year = {2020},
date = {2020-01-01},
journal = {Journal of Educational Multimedia and Hypermedia},
volume = {29},
number = {3},
pages = {265--284},
abstract = {This study investigated the effects of the multimedia and modality design principles using a learning intervention about Australia with a sample of college students, employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principle held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Louis Williams; Eugene McSorley; Rachel McCloy

Enhanced associations with actions of the artist influence gaze behaviour Journal Article

In: i-Perception, vol. 11, no. 2, pp. 1–25, 2020.

Abstract | Links | BibTeX

@article{Williams2020,
title = {Enhanced associations with actions of the artist influence gaze behaviour},
author = {Louis Williams and Eugene McSorley and Rachel McCloy},
doi = {10.1177/2041669520911059},
year = {2020},
date = {2020-01-01},
journal = {i-Perception},
volume = {11},
number = {2},
pages = {1--25},
abstract = {The aesthetic experience of the perceiver of art has been suggested to relate to the art-making process of the artist. The artist's gestures during the creation process have been stated to influence the perceiver's art-viewing experience. However, limited studies explore the art-viewing experience in relation to the creative process of the artist. We introduced eye-tracking measures to further establish how congruent actions with the artist influence perceiver's gaze behaviour. Experiments 1 and 2 showed that simultaneous congruent and incongruent actions do not influence gaze behaviour. However, brushstroke paintings were found to be more pleasing than pointillism paintings. In Experiment 3, participants were trained to associate painting actions with hand primes to enhance visuomotor and visuovisual associations with the artist's actions. A greater amount of time was spent fixating brushstroke paintings when presented with a congruent prime compared with an incongruent prime, and fewer fixations were made to these styles of paintings when presented with an incongruent prime. The results suggest that explicit links that allow perceivers to resonate with the artist's actions lead to greater exploration of preferred artwork styles.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/2041669520911059

Liis Uiga; Catherine M. Capio; Donghyun Ryu; William R. Young; Mark R. Wilson; Thomson W. L. Wong; Andy C. Y. Tse; Rich S. W. Masters

The role of movement-specific reinvestment in visuomotor control of walking by older adults Journal Article

In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 75, no. 2, pp. 282–292, 2020.

Abstract | Links | BibTeX

@article{Uiga2020,
title = {The role of movement-specific reinvestment in visuomotor control of walking by older adults},
author = {Liis Uiga and Catherine M. Capio and Donghyun Ryu and William R. Young and Mark R. Wilson and Thomson W. L. Wong and Andy C. Y. Tse and Rich S. W. Masters},
doi = {10.1093/geronb/gby078},
year = {2020},
date = {2020-01-01},
journal = {Journals of Gerontology - Series B Psychological Sciences and Social Sciences},
volume = {75},
number = {2},
pages = {282--292},
abstract = {Objectives: The aim of this study was to examine the association between conscious monitoring and control of movements (i.e., movement-specific reinvestment) and visuomotor control during walking by older adults. Method: The Movement-Specific Reinvestment Scale (MSRS) was administered to 92 community-dwelling older adults, aged 65-81 years, who were required to walk along a 4.8-m walkway and step on the middle of a target as accurately as possible. Participants' movement kinematics and gaze behavior were measured during approach to the target and when stepping on it. Results: High scores on the MSRS were associated with prolonged stance and double support times during approach to the stepping target, and less accurate foot placement when stepping on the target. No associations between MSRS and gaze behavior were observed. Discussion: Older adults with a high propensity for movement-specific reinvestment seem to need more time to "plan" future stepping movements, yet show worse stepping accuracy than older adults with a low propensity for movement-specific reinvestment. Future research should examine whether older adults with a higher propensity for reinvestment are more likely to display movement errors that lead to falling.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/geronb/gby078

Brenda M Stoesz; Jessica Sutton

Defining the visual complexity of learning management systems using image metrics and subjective ratings Journal Article

In: Canadian Journal of Learning and Technology, vol. 46, no. 2, pp. 1–21, 2020.

Abstract | Links | BibTeX

@article{Stoesz2020,
title = {Defining the visual complexity of learning management systems using image metrics and subjective ratings},
author = {Brenda M Stoesz and Jessica Sutton},
doi = {10.21432/cjlt27899},
year = {2020},
date = {2020-01-01},
journal = {Canadian Journal of Learning and Technology},
volume = {46},
number = {2},
pages = {1--21},
abstract = {Research has demonstrated that students' learning outcomes and motivation to learn are influenced by the visual design of learning technologies (e.g., learning management systems or LMS). One aspect of LMS design that has not been thoroughly investigated is visual complexity. In two experiments, postsecondary students rated the visual complexity of images of LMS after exposure durations of 50-500 ms. Perceptions of complexity were positively correlated across timed conditions and working memory capacity was associated with complexity ratings. Low-level image metrics were also found to predict perceptions of the LMS complexity. Results demonstrate the importance of the visual design of learning technologies and suggest that additional research on the impact of LMS visual complexity on learning outcomes is warranted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.21432/cjlt27899
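
"Low-level image metrics" covers many candidate measures; edge density is a common one. The sketch below shows one plausible way to relate such a metric to mean complexity ratings. It is purely illustrative: the Sobel-based metric, the one-SD cut-off, and the linear regression are assumptions for demonstration, not the measures or models used in the paper.

import numpy as np
from scipy.ndimage import sobel
from sklearn.linear_model import LinearRegression

def edge_density(gray):
    """Fraction of pixels with a strong luminance gradient (here: more than
    one SD above the mean gradient magnitude -- an arbitrary cut-off)."""
    gx, gy = sobel(gray, axis=0), sobel(gray, axis=1)
    mag = np.hypot(gx, gy)
    return float((mag > mag.mean() + mag.std()).mean())

# Toy data: random "screenshots" and made-up mean complexity ratings.
rng = np.random.default_rng(0)
images = [rng.random((120, 160)) for _ in range(10)]
ratings = rng.uniform(1.0, 7.0, 10)              # e.g., a 7-point scale
X = np.array([[edge_density(im)] for im in images])
model = LinearRegression().fit(X, ratings)
print("R^2 on toy data:", model.score(X, ratings))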

Jorrig Vogels; David M. Howcroft; Elli Tourtouri; Vera Demberg

How speakers adapt object descriptions to listeners under load Journal Article

In: Language, Cognition and Neuroscience, vol. 35, no. 1, pp. 78–92, 2020.

Abstract | Links | BibTeX

@article{Vogels2020,
title = {How speakers adapt object descriptions to listeners under load},
author = {Jorrig Vogels and David M. Howcroft and Elli Tourtouri and Vera Demberg},
doi = {10.1080/23273798.2019.1648839},
year = {2020},
date = {2020-01-01},
journal = {Language, Cognition and Neuroscience},
volume = {35},
number = {1},
pages = {78--92},
publisher = {Taylor & Francis},
abstract = {A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/23273798.2019.1648839

Ye Xia; Mauro Manassi; Ken Nakayama; Karl Zipser; David Whitney

Visual crowding in driving Journal Article

In: Journal of Vision, vol. 20, no. 6, pp. 1–17, 2020.

Abstract | Links | BibTeX

@article{Xia2020,
title = {Visual crowding in driving},
author = {Ye Xia and Mauro Manassi and Ken Nakayama and Karl Zipser and David Whitney},
doi = {10.1167/jov.20.6.1},
year = {2020},
date = {2020-01-01},
journal = {Journal of Vision},
volume = {20},
number = {6},
pages = {1--17},
abstract = {Visual crowding-the deleterious influence of nearby objects on object recognition-is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1167/jov.20.6.1
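
Of the diagnostic criteria named in this abstract, Bouma's rule of thumb is simple enough to state in code: a flanker is expected to interfere when it falls within roughly half the target's eccentricity. The snippet below is a generic illustration; the proportionality constant of 0.5 is a conventional approximation that varies across studies and is not taken from this paper.

def is_crowded(target_ecc_deg, flanker_spacing_deg, b=0.5):
    """Bouma's rule of thumb: flankers closer than ~b * eccentricity
    (b is conventionally around 0.5) are expected to crowd the target."""
    return flanker_spacing_deg < b * target_ecc_deg

# A pedestrian 10 degrees in the periphery with clutter 3 degrees away:
print(is_crowded(10.0, 3.0))   # True -> crowding expected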

Pedro G. Vieira; Matthew R. Krause; Christopher C. Pack

tACS entrains neural activity while somatosensory input is blocked Journal Article

In: PLoS Biology, vol. 18, no. 10, pp. 1–14, 2020.

Abstract | Links | BibTeX

@article{Vieira2020,
title = {tACS entrains neural activity while somatosensory input is blocked},
author = {Pedro G. Vieira and Matthew R. Krause and Christopher C. Pack},
doi = {10.1371/journal.pbio.3000834},
year = {2020},
date = {2020-01-01},
journal = {PLoS Biology},
volume = {18},
number = {10},
pages = {1--14},
abstract = {Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pbio.3000834

Yan Zhou

Psychological analysis of online teaching in colleges based on eye-tracking technology Journal Article

In: Revista Argentina de Clinica Psicologica, vol. 29, no. 2, pp. 523–529, 2020.

Abstract | Links | BibTeX

@article{Zhou2020a,
title = {Psychological analysis of online teaching in colleges based on eye-tracking technology},
author = {Yan Zhou},
doi = {10.24205/03276716.2020.272},
year = {2020},
date = {2020-01-01},
journal = {Revista Argentina de Clinica Psicologica},
volume = {29},
number = {2},
pages = {523--529},
abstract = {Eye-tracking technology has been widely adopted to capture the psychological changes of college students during the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model: pupil diameter, fixation time, re-reading time, and retrospective time. A total of 100 college students took part in an eye movement test in an online teaching environment, and the test data were analyzed in SPSS. The results show that the eye movement parameters are strongly affected by the key points of the lesson and by content that interests the students; these two factors can arouse and hold students' attention during teaching. The findings provide an important reference for the psychological study of online teaching in colleges.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.24205/03276716.2020.272

Chenzhu Zhao

Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments Journal Article

In: International Journal of Frontiers in Sociology, vol. 2, no. 7, pp. 1–12, 2020.

Abstract | BibTeX

@article{Zhao2020a,
title = {Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments},
author = {Chenzhu Zhao},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Frontiers in Sociology},
volume = {2},
number = {7},
pages = {1--12},
abstract = {Online travel agencies (OTAs) depend on marketing cues to reduce consumers' uncertainty perceptions of online travel-related products. The latest booking time (LBT) presented to the consumer has a significant impact on purchasing decisions. This study explores the effect of LBT on consumers' visual attention and booking intention, along with the moderating effect of online comment valence (OCV). Since eye movements are closely bound up with shifts of visual attention, eye tracking was used to record consumers' visual attention. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design. The main findings were as follows: (1) LBT markedly increases visual attention to the advertisement as a whole and improves booking intention; (2) OCV moderates the effect of LBT on both visual attention to the advertisement and booking intention: only when OCV is medium or high does LBT markedly improve attention to the advertisement and increase consumers' booking intention. The results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect when OCV is low.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Huiru Shao; Jing Li; Wenbo Wan; Huaxiang Zhang; Jiande Sun

Saccadic trajectory-based identity authentication Journal Article

In: Multimedia Tools and Applications, vol. 79, no. 7-8, pp. 4891–4905, 2020.

Abstract | Links | BibTeX

@article{Shao2020,
title = {Saccadic trajectory-based identity authentication},
author = {Huiru Shao and Jing Li and Wenbo Wan and Huaxiang Zhang and Jiande Sun},
doi = {10.1007/s11042-018-6816-5},
year = {2020},
date = {2020-01-01},
journal = {Multimedia Tools and Applications},
volume = {79},
number = {7-8},
pages = {4891--4905},
publisher = {Multimedia Tools and Applications},
abstract = {The saccadic trajectory is generated by the extra-ocular muscles of the eyes through a complex mechanism driven by neural signals from the brain. Saccadic trajectories are non-reproducible and can be recorded without physical contact. In this paper, we propose a saccadic trajectory-based identity authentication method, on the premise that the saccadic trajectory can serve as a behavioral biometric. In this method, we adopt the Velocity-Threshold (I-VT) algorithm to extract saccadic trajectories from the full eye movement recording, extract features via the wavelet packet transform, and authenticate identity by classifying these features with an SVM. We verify the proposed method on the EMDBv1.0 dataset of horizontal eye movements. We select one subject as the host and randomly choose another 50 subjects from the remaining 58 subjects as attackers. We achieve the best performance by optimizing the feature selection and the SVM parameters. The experimental results show that the average accuracy for accepting the host reaches 98.09%, and the average accuracy for rejecting attackers reaches 99.55%. This demonstrates that saccadic trajectory-based identity authentication is a promising approach in information security.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s11042-018-6816-5
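
The pipeline this abstract describes (I-VT segmentation, wavelet packet features, SVM classification) maps onto a short script. The Python sketch below is not the authors' implementation: the sampling rate, the 30 deg/s velocity threshold, the db4 wavelet, the decomposition level, and the RBF kernel are all illustrative assumptions.

import numpy as np
import pywt
from sklearn.svm import SVC

FS = 1000.0   # sampling rate, Hz (assumption)
VT = 30.0     # I-VT velocity threshold, deg/s (assumption)

def ivt_saccades(x_deg, fs=FS, vt=VT, min_len=3):
    """Velocity-Threshold (I-VT): samples whose point-to-point velocity
    exceeds vt are saccadic; contiguous runs of such samples form saccade
    segments (segment endpoints are approximate in this sketch)."""
    v = np.abs(np.diff(x_deg)) * fs
    mask = np.r_[False, v > vt, False]
    edges = np.flatnonzero(np.diff(mask.astype(int)))
    return [x_deg[s:e + 1] for s, e in zip(edges[::2], edges[1::2])
            if e - s >= min_len]

def wp_energy_features(seg, wavelet="db4", level=2):
    """Feature vector: the energy in each terminal node of a wavelet
    packet decomposition of one saccade segment."""
    wp = pywt.WaveletPacket(data=seg, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(n.data ** 2) for n in wp.get_level(level)])

# Toy trace: 2 s of fixation with one 8-degree rightward saccade at t = 1 s.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 1.0 / FS)
x = np.interp(t, [0.0, 1.0, 1.03, 2.0], [0.0, 0.0, 8.0, 8.0])
x += rng.normal(0.0, 0.005, t.size)
segs = ivt_saccades(x)
feats = np.stack([wp_energy_features(s) for s in segs])
print(len(segs), "saccade(s), feature vector length", feats.shape[1])
# Authentication is then a two-class problem (host vs. attackers), e.g.:
# clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(features, labels)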

Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui

Enhanced memory-driven attentional capture in action video game players Journal Article

In: Computers in Human Behavior, vol. 107, pp. 1–7, 2020.

Abstract | Links | BibTeX

@article{Zhang2020a,
title = {Enhanced memory-driven attentional capture in action video game players},
author = {Bao Zhang and Shuhui Liu and Cenlou Hu and Ziwen Luo and Sai Huang and Jie Sui},
doi = {10.1016/j.chb.2020.106271},
year = {2020},
date = {2020-01-01},
journal = {Computers in Human Behavior},
volume = {107},
pages = {1--7},
publisher = {Elsevier Ltd},
abstract = {Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.chb.2020.106271

Nino Sharvashidze; Alexander C Schütz

Task-dependent eye-movement patterns in viewing art Journal Article

In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–17, 2020.

Abstract | BibTeX

@article{Sharvashidze2020,
title = {Task-dependent eye-movement patterns in viewing art},
author = {Nino Sharvashidze and Alexander C Schütz},
year = {2020},
date = {2020-01-01},
journal = {Journal of Eye Movement Research},
volume = {13},
number = {2},
pages = {1--17},
abstract = {In art schools and classes for art history, students are trained to pay attention to different aspects of an artwork, such as art movement characteristics and painting techniques. Experts are better at processing the style and visual features of an artwork than nonprofessionals. Here we tested the hypothesis that experts in art use different, task-dependent viewing strategies than nonprofessionals when analyzing a piece of art. We compared a group of art history students with a group of students with no art education background while they viewed 36 paintings under three discrimination tasks. Participants were asked to determine the art movement, the date and the medium of the paintings. We analyzed behavioral and eye-movement data of 27 participants. Our observers adjusted their viewing strategies according to the task, resulting in longer fixation durations and shorter saccade amplitudes for the medium detection task. We found higher task accuracy and subjective confidence, less congruence and higher dispersion in fixation locations in experts. Expertise also influenced saccade metrics, biasing them towards larger saccade amplitudes, suggesting a more holistic scanning strategy in experts across all three tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

Byunghoon “Tony” Ahn; Jason M. Harley

Facial expressions when learning with a Queer History App: Application of the control value theory of achievement emotions Journal Article

In: British Journal of Educational Technology, vol. 51, no. 5, pp. 1563–1576, 2020.

Abstract | Links | BibTeX

@article{Ahn2020,
title = {Facial expressions when learning with a Queer History App: Application of the control value theory of achievement emotions},
author = {Byunghoon “Tony” Ahn and Jason M. Harley},
doi = {10.1111/bjet.12989},
year = {2020},
date = {2020-01-01},
journal = {British Journal of Educational Technology},
volume = {51},
number = {5},
pages = {1563--1576},
abstract = {Learning analytics (LA) incorporates analyzing cognitive, social and emotional processes in learning scenarios to make informed decisions regarding instructional design and delivery. Research has highlighted important roles that emotions play in learning. We have extended this field of research by exploring the role of emotions in a relatively uncommon learning scenario: learning about queer history with a multimedia mobile app. Specifically, we used an automatic facial recognition software (FaceReader 7) to measure learners' discrete emotions and a counter-balanced multiple-choice quiz to assess learning. We also used an eye tracker (EyeLink 1000) to identify the emotions learners experienced while they read specific content, as opposed to the emotions they experienced over the course of the entire learning session. A total of 33 out of 57 of the learners' data were eligible to be analyzed. Results revealed that learners expressed more negative-activating emotions (ie, anger, anxiety) and negative-deactivating emotions (ie, sadness) than positive-activating emotions (ie, happiness). Learners with an angry emotion profile had the highest learning gains. The importance of examining typically undesirable emotions in learning, such as anger, is discussed using the control-value theory of achievement emotions. Further, this study describes a multimodal methodology to integrate behavioral trace data into learning analytics research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1111/bjet.12989

Hamidreza Azemati; Fatemeh Jam; Modjtaba Ghorbani; Matthias Dehmer; Reza Ebrahimpour; Abdolhamid Ghanbaran; Frank Emmert-Streib

The role of symmetry in the aesthetics of residential building façades using cognitive science methods Journal Article

In: Symmetry, vol. 12, pp. 1–15, 2020.

Abstract | Links | BibTeX

@article{Azemati2020,
title = {The role of symmetry in the aesthetics of residential building façades using cognitive science methods},
author = {Hamidreza Azemati and Fatemeh Jam and Modjtaba Ghorbani and Matthias Dehmer and Reza Ebrahimpour and Abdolhamid Ghanbaran and Frank Emmert-Streib},
doi = {10.3390/sym12091438},
year = {2020},
date = {2020-01-01},
journal = {Symmetry},
volume = {12},
pages = {1--15},
abstract = {Symmetry is an important visual feature for humans and its application in architecture is completely evident. This paper aims to investigate the role of symmetry in the aesthetics judgment of residential building façades and study the pattern of eye movement based on the expertise of subjects in architecture. In order to implement this in the present paper, we have created images in two categories: symmetrical and asymmetrical façade images. The experiment design allows us to investigate the preference of subjects and their reaction time to decide about presented images as well as record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer's attention, but there is a difference in the points watched and their fixation duration. Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/sym12091438
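
For readers who want a concrete handle on "mirror symmetry" as a stimulus property, here is a minimal Python sketch that scores horizontal mirror symmetry by correlating the left half of a grayscale image with the mirrored right half. It is an illustrative measure only, not the analysis used in the paper.

import numpy as np

def horizontal_symmetry_score(image: np.ndarray) -> float:
    """Pearson correlation between the left half and the mirrored right half.

    `image` is a 2-D grayscale array; 1.0 means perfectly mirror-symmetric.
    """
    h, w = image.shape
    half = w // 2
    left = image[:, :half]
    right_mirrored = image[:, w - half:][:, ::-1]
    return float(np.corrcoef(left.ravel(), right_mirrored.ravel())[0, 1])

# Example: a left-to-right gradient is maximally asymmetric; a "V" profile
# centered on the image midline is perfectly mirror-symmetric.
asym = np.tile(np.arange(100, dtype=float), (50, 1))
sym = np.tile(np.abs(np.arange(100) - 49.5), (50, 1))
print(horizontal_symmetry_score(asym))  # -1.0
print(horizontal_symmetry_score(sym))   # 1.0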

Anissa Boutabla; Samuel Cavuscens; Maurizio Ranieri; Céline Crétallaz; Herman Kingma; Raymond Berg; Nils Guinand; Angélica Pérez Fornos

Simultaneous activation of multiple vestibular pathways upon electrical stimulation of semicircular canal afferents Journal Article

In: Journal of Neurology, vol. 267, no. 1, pp. S273–S284, 2020.

@article{Boutabla2020,
title = {Simultaneous activation of multiple vestibular pathways upon electrical stimulation of semicircular canal afferents},
author = {Anissa Boutabla and Samuel Cavuscens and Maurizio Ranieri and Céline Crétallaz and Herman Kingma and Raymond Berg and Nils Guinand and Angélica Pérez Fornos},
doi = {10.1007/s00415-020-10120-1},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurology},
volume = {267},
number = {1},
pages = {S273--S284},
publisher = {Springer Berlin Heidelberg},
abstract = {Background and purpose: Vestibular implants seem to be a promising treatment for patients suffering from severe bilateral vestibulopathy. To optimize outcomes, we need to investigate how, and to which extent, the different vestibular pathways are activated. Here we characterized the simultaneous responses to electrical stimuli of three different vestibular pathways. Methods: Three vestibular implant recipients were included. First, activation thresholds and amplitude growth functions of electrically evoked vestibulo-ocular reflexes (eVOR), cervical myogenic potentials (ecVEMPs) and vestibular percepts (vestibulo-thalamo-cortical, VTC) were recorded upon stimulation with single, biphasic current pulses (200 µs/phase) delivered through five different vestibular electrodes. Latencies of eVOR and ecVEMPs were also characterized. Then we compared the amplitude growth functions of the three pathways using different stimulation profiles (1-pulse, 200 µs/phase; 1-pulse, 50 µs/phase; 4-pulses, 50 µs/phase, 1600 pulses-per-second) in one patient (two electrodes). Results: The median latencies of the eVOR and ecVEMPs were 8 ms (8–9 ms) and 10.2 ms (9.6–11.8 ms), respectively. While the amplitude of eVOR and ecVEMP responses increased with increasing stimulation current, the VTC pathway showed a different, step-like behavior. In this study, the 200 µs/phase paradigm appeared to give the best balance to enhance responses at lower stimulation currents. Conclusions: This study is a first attempt to evaluate the simultaneous activation of different vestibular pathways. However, this issue deserves further and more detailed investigation to determine the actual possibility of selective stimulation of a given pathway, as well as the functional impact of the contribution of each pathway to the overall rehabilitation process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1007/s00415-020-10120-1

Christopher D. D. Cabrall; Riender Happee; Joost C. F. De Winter

Prediction of effort and eye movement measures from driving scene components Journal Article

In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 68, pp. 187–197, 2020.

@article{Cabrall2020,
title = {Prediction of effort and eye movement measures from driving scene components},
author = {Christopher D. D. Cabrall and Riender Happee and Joost C. F. De Winter},
doi = {10.1016/j.trf.2019.11.001},
year = {2020},
date = {2020-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {68},
pages = {187--197},
publisher = {Elsevier Ltd},
abstract = {For transitions of control in automated vehicles, driver monitoring systems (DMS) may need to discern task difficulty and driver preparedness. Such DMS require models that relate driving scene components, driver effort, and eye measurements. Across two sessions, 15 participants enacted receiving control within 60 randomly ordered dashcam videos (3-second duration) with variations in visible scene components: road curve angle, road surface area, road users, symbols, infrastructure, and vegetation/trees while their eyes were measured for pupil diameter, fixation duration, and saccade amplitude. The subjective measure of effort and the objective measure of saccade amplitude evidenced the highest correlations (r = 0.34 and r = 0.42, respectively) with the scene component of road curve angle. In person-specific regression analyses combining all visual scene components as predictors, average predictive correlations ranged between 0.49 and 0.58 for subjective effort and between 0.36 and 0.49 for saccade amplitude, depending on cross-validation techniques of generalization and repetition. In conclusion, the present regression equations establish quantifiable relations between visible driving scene components with both subjective effort and objective eye movement measures. In future DMS, such knowledge can help inform road-facing and driver-facing cameras to jointly establish the readiness of would-be drivers ahead of receiving control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.trf.2019.11.001
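
A hedged sketch of the kind of regression reported above: predicting an effort rating from the six visible scene components, scored with cross-validation. Data and coefficients below are random placeholders; only the predictor names follow the abstract.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n_clips = 60  # the study used 60 dashcam clips

# Columns: curve angle, road surface area, road users, symbols,
# infrastructure, vegetation (all random placeholders here).
X = rng.random((n_clips, 6))
# Placeholder effort ratings loosely driven by curve angle (column 0).
y = 2.0 * X[:, 0] + rng.normal(0, 0.5, n_clips)

model = LinearRegression()
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 2))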

Andrea Caoli; Silvio P. Sabatini; Agostino Gibaldi; Guido Maiello; Anna Kosovicheva; Peter Bex

A dichoptic feedback-based oculomotor training method to manipulate interocular alignment Journal Article

In: Scientific Reports, vol. 10, pp. 15634, 2020.

@article{Caoli2020,
title = {A dichoptic feedback-based oculomotor training method to manipulate interocular alignment},
author = {Andrea Caoli and Silvio P. Sabatini and Agostino Gibaldi and Guido Maiello and Anna Kosovicheva and Peter Bex},
doi = {10.1038/s41598-020-72561-y},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {15634},
publisher = {Nature Publishing Group UK},
abstract = {Strabismus is a prevalent impairment of binocular alignment that is associated with a spectrum of perceptual deficits and social disadvantages. Current treatments for strabismus involve ocular alignment through surgical or optical methods and may include vision therapy exercises. In the present study, we explore the potential of real-time dichoptic visual feedback that may be used to quantify and manipulate interocular alignment. A gaze-contingent ring was presented independently to each eye of 11 normally-sighted observers as they fixated a target dot presented only to their dominant eye. Their task was to center the rings within 2° of the target for at least 1 s, with feedback provided by the sizes of the rings. By offsetting the ring in the non-dominant eye temporally or nasally, this task required convergence or divergence, respectively, of the non-dominant eye. Eight of 11 observers attained 5° asymmetric convergence and 3 of 11 attained 3° asymmetric divergence. The results suggest that real-time gaze-contingent feedback may be used to quantify and transiently simulate strabismus and holds promise as a method to augment existing therapies for oculomotor alignment disorders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-020-72561-y
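
The feedback rule described above (ring size reflecting gaze-to-target distance, with a "within 2 degrees for at least 1 s" completion criterion) can be sketched in a few lines. The gain, minimum radius, and sampling interval below are assumptions, not the authors' parameters.

import math

CRITERION_DEG = 2.0  # ring must be centered within 2 degrees of the target
HOLD_MS = 1000       # ...for at least one second

def ring_radius(gaze_deg, target_deg, gain=1.5, min_radius=0.5):
    """Ring radius (deg) grows with gaze-target distance: the feedback signal."""
    dist = math.hypot(gaze_deg[0] - target_deg[0], gaze_deg[1] - target_deg[1])
    return min_radius + gain * dist

def trial_complete(distances_deg, sample_interval_ms=1):
    """distances_deg: gaze-target distance per sample (one sample per ms here)."""
    held = 0
    for dist in distances_deg:
        held = held + sample_interval_ms if dist <= CRITERION_DEG else 0
        if held >= HOLD_MS:
            return True
    return False

# E.g., 1.2 s of stable fixation 1 degree from the target completes the trial.
print(trial_complete([1.0] * 1200))  # True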

Xianglan Chen; Hulin Ren; Yamin Liu; Bendegul Okumus; Anil Bilgihan

Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment Journal Article

In: International Journal of Hospitality Management, vol. 84, pp. 1–10, 2020.

@article{Chen2020f,
title = {Attention to Chinese menus with metaphorical or metonymic names: An eye movement lab experiment},
author = {Xianglan Chen and Hulin Ren and Yamin Liu and Bendegul Okumus and Anil Bilgihan},
doi = {10.1016/j.ijhm.2019.05.001},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Hospitality Management},
volume = {84},
pages = {1--10},
publisher = {Elsevier},
abstract = {Food is as cultural as it is practical, and names of dishes accordingly have cultural nuances. Menus serve as communication tools between restaurants and their guests, representing the culinary philosophy of the chefs and proprietors involved. The purpose of this experimental lab study is to compare differences of attention paid to textual and pictorial elements of menus with metaphorical and/or metonymic names. Eye movement technology was applied in a 2 × 3 between-subject experiment (n = 40), comparing the strength of visual metaphors (e.g., images of menu items on the menu) and direct textual names in Chinese and English with regard to guests' willingness to purchase the dishes in question. Post-test questionnaires were also employed to assess participants' attitudes toward menu designs. Study results suggest that visual metaphors are more efficient when reflecting a product's strength. Images are shown to positively influence consumers' expectations of taste and enjoyment, garnering the most attention under all six conditions studied here, and constitute the most effective format when Chinese alone names are present. The textual claim increases perception of the strength of menu items along with purchase intention. Metaphorical dish names with bilingual (i.e., Chinese and English) names hold the greatest appeal. This result can be interpreted from the perspective of grounded cognition theory, which suggests that situated simulations and re-enactment of perceptual, motor, and affective processes can support abstract thought. The lab results and survey provide specific theoretical and managerial implications with regard to translating names of Chinese dishes to attract customers' attention to specific menu items.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.ijhm.2019.05.001

Agnieszka Chmiel; Przemysław Janikowski; Agnieszka Lijewska

Multimodal processing in simultaneous interpreting with text Journal Article

In: Target, vol. 32, no. 1, pp. 37–58, 2020.

@article{Chmiel2020,
title = {Multimodal processing in simultaneous interpreting with text},
author = {Agnieszka Chmiel and Przemysław Janikowski and Agnieszka Lijewska},
doi = {10.1075/target.18157.chm},
year = {2020},
date = {2020-01-01},
journal = {Target},
volume = {32},
number = {1},
pages = {37--58},
abstract = {The present study focuses on (in)congruence of input between the visual and the auditory modality in simultaneous interpreting with text. We asked twenty-four professional conference interpreters to simultaneously interpret an aurally and visually presented text with controlled incongruences in three categories (numbers, names and control words), while measuring interpreting accuracy and eye movements. The results provide evidence for the dominance of the visual modality, which goes against the professional standard of following the auditory modality in the case of incongruence. Numbers enjoyed the greatest accuracy across conditions possibly due to simple cross-language semantic mappings. We found no evidence for a facilitation effect for congruent items, and identified an impeding effect of the presence of the visual text for incongruent items. These results might be interpreted either as evidence for the Colavita effect (in which visual stimuli take precedence over auditory ones) or as strategic behaviour applied by professional interpreters to avoid risk.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1075/target.18157.chm

Francisco M. Costela; José J. Castro-Torres

Risk prediction model using eye movements during simulated driving with logistic regressions and neural networks Journal Article

In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 74, pp. 511–521, 2020.

@article{Costela2020,
title = {Risk prediction model using eye movements during simulated driving with logistic regressions and neural networks},
author = {Francisco M. Costela and José J. Castro-Torres},
doi = {10.1016/j.trf.2020.09.003},
year = {2020},
date = {2020-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {74},
pages = {511--521},
publisher = {Elsevier Ltd},
abstract = {Background: Many studies have found that eye movement behavior provides a real-time index of mental activity. Risk management architectures embedded in autonomous vehicles fail to include human cognitive aspects. We set out to evaluate whether eye movements during a risk driving detection task are able to predict risk situations. Methods: Thirty-two normally sighted subjects (15 female) saw 20 clips of recorded driving scenes while their gaze was tracked. They reported when they considered the car should brake, anticipating any hazard. We applied both a mixed-effect logistic regression model and feedforward neural networks between hazard reports and eye movement descriptors. Results: All subjects reported at least one major collision hazard in each video (average 3.5 reports). We found that hazard situations were predicted by larger saccades, more and longer fixations, fewer blinks, and a smaller gaze dispersion in both horizontal and vertical dimensions. Performance between models incorporating a different combination of descriptors was compared running a test of equality of receiver operating characteristic areas. Feedforward neural networks outperformed logistic regressions in accuracies. The model including saccadic magnitude, fixation duration, dispersion in x, and pupil returned the highest ROC area (0.73). Conclusion: We evaluated each eye movement descriptor successfully and created separate models that predicted hazard events with an average efficacy of 70% using both logistic regressions and feedforward neural networks. The use of driving simulators and hazard detection videos can be considered a reliable methodology to study risk prediction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.trf.2020.09.003
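
As a rough illustration of the modeling approach (not the authors' code), the sketch below fits a logistic regression from simulated eye-movement descriptors to hazard reports and scores it by ROC area, the same metric reported in the abstract.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000  # placeholder 1-s epochs

# Descriptors per epoch: saccade amplitude, fixation count, fixation duration,
# blink count, horizontal gaze dispersion, pupil diameter (random placeholders).
X = rng.normal(size=(n, 6))
# Simulated ground truth: hazards co-occur with larger saccades, fewer blinks.
logits = 1.2 * X[:, 0] - 0.8 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
print("ROC area:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))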

Joe Cutting; Paul Cairns

Investigating game attention using the Distraction Recognition Paradigm Journal Article

In: Behaviour and Information Technology, pp. 1–21, 2020.

@article{Cutting2020,
title = {Investigating game attention using the Distraction Recognition Paradigm},
author = {Joe Cutting and Paul Cairns},
doi = {10.1080/0144929X.2020.1849402},
year = {2020},
date = {2020-01-01},
journal = {Behaviour and Information Technology},
pages = {1--21},
publisher = {Taylor & Francis},
abstract = {Digital games are well known for holding players' attention and stopping them from being distracted by events around them. Being able to quantify how well games hold attention provides a behavioral foundation for measures of game engagement and a link to existing research on attention. We developed a new behavioral measure of how well games hold attention, based on players' post-game recognition of irrelevant distractors which are shown around the game. This is known as the Distractor Recognition Paradigm (DRP). In two studies we show that the DRP is an effective measure of how well self-paced games hold attention. We show that even simple self-paced games can hold players' attention completely and the consistency of attentional focus is moderated by game engagement. We compare the DRP to existing measures of both attention and engagement and consider how practical it is as a measure of game engagement. We find no evidence that eye tracking is a superior measure of attention to distractor recognition. We discuss existing research on attention and consider implications for areas such as motivation to play and serious games.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/0144929X.2020.1849402

Giorgia D'Innocenzo; Alexander V. Nowicky; Daniel T. Bishop

Dynamic task observation: A gaze-mediated complement to traditional action observation treatment? Journal Article

In: Behavioural Brain Research, vol. 379, pp. 1–13, 2020.

@article{DInnocenzo2020,
title = {Dynamic task observation: A gaze-mediated complement to traditional action observation treatment?},
author = {Giorgia D'Innocenzo and Alexander V. Nowicky and Daniel T. Bishop},
doi = {10.1016/j.bbr.2019.112351},
year = {2020},
date = {2020-01-01},
journal = {Behavioural Brain Research},
volume = {379},
pages = {1--13},
publisher = {Elsevier},
abstract = {Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise, but also their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action, relative to an everyday one – a reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action as we recorded MEPs from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf swing observation, but it was not modulated by expertise, relative to baseline; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes; in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing of a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.bbr.2019.112351

Trafton Drew; James Guthrie; Isabel Reback

Worse in real life: An eye-tracking examination of the cost of CAD at low prevalence Journal Article

In: Journal of Experimental Psychology: Applied, vol. 26, no. 4, pp. 659–670, 2020.

@article{Drew2020a,
title = {Worse in real life: An eye-tracking examination of the cost of CAD at low prevalence},
author = {Trafton Drew and James Guthrie and Isabel Reback},
doi = {10.1037/xap0000277},
year = {2020},
date = {2020-01-01},
journal = {Journal of Experimental Psychology: Applied},
volume = {26},
number = {4},
pages = {659--670},
abstract = {Computer-aided detection (CAD) is applied during screening mammography for millions of women each year. Despite its popularity, several large studies have observed no benefit in breast cancer detection for practices that use CAD. This lack of benefit may be driven by how CAD information is conveyed to the radiologist. In the current study, we examined this possibility in an artificial task modeled after screening mammography. Prior work at high (50%) target prevalence suggested that CAD marks might disrupt visual attention: Targets that are missed by the CAD system are more likely to be missed by the user. However, targets are much less common in screening mammography. Moreover, the prior work on this topic has focused on simple binary CAD systems that place marks on likely locations, but some modern CAD systems employ interactive CAD (iCAD) systems that may mitigate the previously observed costs. Here, we examined the effects of target prevalence and CAD system. We found that the costs of binary CAD were exacerbated at low prevalence. Meanwhile, iCAD did not lead to a cost on unmarked targets, which suggests that this sort of CAD implementation may be superior to more traditional binary CAD implementations when targets occur infrequently.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/xap0000277

Camilla E. J. Elphick; Graham E. Pike; Graham J. Hole

You can believe your eyes: Measuring implicit recognition in a lineup with pupillometry Journal Article

In: Psychology, Crime and Law, vol. 26, no. 1, pp. 67–92, 2020.

@article{Elphick2020,
title = {You can believe your eyes: Measuring implicit recognition in a lineup with pupillometry},
author = {Camilla E. J. Elphick and Graham E. Pike and Graham J. Hole},
doi = {10.1080/1068316X.2019.1634196},
year = {2020},
date = {2020-01-01},
journal = {Psychology, Crime and Law},
volume = {26},
number = {1},
pages = {67--92},
publisher = {Taylor & Francis},
abstract = {As pupil size is affected by cognitive processes, we investigated whether it could serve as an independent indicator of target recognition in lineups. Participants saw a simulated crime video, followed by two viewings of either a target-present or target-absent video lineup while pupil size was measured with an eye-tracker. Participants who made correct identifications showed significantly larger pupil sizes when viewing the target compared with distractors. Some participants were uncertain about their choice of face from the lineup, but nevertheless showed pupillary changes when viewing the target, suggesting covert recognition of the target face had occurred. The results suggest that pupillometry might be a useful aid in assessing the accuracy of an eyewitness' identification.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1080/1068316X.2019.1634196
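
The core comparison here, pupil size while viewing the target versus the distractors, reduces to a paired test over per-participant means. A minimal sketch with simulated values follows; a real analysis would baseline-correct the pupil traces first.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated per-participant mean pupil sizes (mm); not the study's data.
pupil_on_target = rng.normal(loc=4.2, scale=0.3, size=40)
pupil_on_distractors = rng.normal(loc=4.0, scale=0.3, size=40)

t, p = stats.ttest_rel(pupil_on_target, pupil_on_distractors)
print(f"paired t = {t:.2f}, p = {p:.4f}")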

Gemma Fitzsimmons; Lewis T. Jayes; Mark J. Weal; Denis Drieghe

The impact of skim reading and navigation when reading hyperlinks on the web Journal Article

In: PLoS ONE, vol. 15, no. 9, pp. e0239134, 2020.

@article{Fitzsimmons2020,
title = {The impact of skim reading and navigation when reading hyperlinks on the web},
author = {Gemma Fitzsimmons and Lewis T. Jayes and Mark J. Weal and Denis Drieghe},
doi = {10.1371/journal.pone.0239134},
year = {2020},
date = {2020-01-01},
journal = {PLoS ONE},
volume = {15},
number = {9},
pages = {e0239134},
abstract = {It has been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect lexical processing of words. Across two experiments, we utilised eye tracking methodology to explore how hyperlinks and navigating webpages affect reading behaviour. In Experiment 1, participants read static Webpages either for comprehension or whilst skim reading, while in Experiment 2, participants additionally read through a navigable Web environment. Embedded target words were either hyperlinks or not and were either high-frequency or low-frequency words. Results from Experiment 1 show that while readers lexically process both linked and unlinked words when reading for comprehension, readers only fully lexically process linked words when skim reading, as was evidenced by a frequency effect that was absent for the unlinked words. They did fully lexically process both linked and unlinked words when reading for comprehension. In Experiment 2, which allowed for navigating, readers only fully lexically processed linked words compared to unlinked words, regardless of whether they were skim reading or reading for comprehension. We suggest that readers engage in an efficient reading strategy where they attempt to minimise comprehension loss while maintaining a high reading speed. Readers use hyperlinks as markers to suggest important information and use them to navigate through the text in an efficient and effective way. The task of reading on the Web causes readers to lexically process words in a markedly different way from typical reading experiments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0239134

Mathilda Froesel; Quentin Goudard; Marc Hauser; Maëva Gacoin; Suliann Ben Hamed

Automated video-based heart rate tracking for the anesthetized and behaving monkey Journal Article

In: Scientific Reports, vol. 10, pp. 17940, 2020.

@article{Froesel2020,
title = {Automated video-based heart rate tracking for the anesthetized and behaving monkey},
author = {Mathilda Froesel and Quentin Goudard and Marc Hauser and Maëva Gacoin and Suliann Ben Hamed},
doi = {10.1038/s41598-020-74954-5},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {17940},
publisher = {Nature Publishing Group UK},
abstract = {Heart rate (HR) is extremely valuable in the study of complex behaviours and their physiological correlates in non-human primates. However, collecting this information is often challenging, involving either invasive implants or tedious behavioural training. In the present study, we implement a Eulerian video magnification (EVM) heart tracking method in the macaque monkey combined with wavelet transform. This is based on a measure of image to image fluctuations in skin reflectance due to changes in blood influx. We show a strong temporal coherence and amplitude match between EVM-based heart tracking and ground truth ECG, from both color (RGB) and infrared (IR) videos, in anesthetized macaques, to a level comparable to what can be achieved in humans. We further show that this method allows to identify consistent HR changes following the presentation of conspecific emotional voices or faces. EVM is used to extract HR in humans but has never been applied to non-human primates. Video photoplethysmography allows to extract awake macaques HR from RGB videos. In contrast, our method allows to extract awake macaques HR from both RGB and IR videos and is particularly resilient to the head motion that can be observed in awake behaving monkeys. Overall, we believe that this method can be generalized as a tool to track HR of the awake behaving monkey, for ethological, behavioural, neuroscience or welfare purposes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-020-74954-5
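
A drastically simplified sketch of video-based heart-rate extraction: average skin-pixel intensity per frame, restrict to plausible heart-rate frequencies, and read off the dominant FFT peak. The paper's actual pipeline (Eulerian video magnification plus a wavelet transform) is far more robust; this shows only the underlying idea.

import numpy as np

FPS = 30.0                     # assumed camera frame rate
t = np.arange(0, 30, 1 / FPS)  # 30 s of frames
rng = np.random.default_rng(3)
# Placeholder signal: a 2 Hz (120 bpm) pulse buried in sensor noise.
mean_intensity = 0.05 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 0.2, t.size)

signal = mean_intensity - mean_intensity.mean()
freqs = np.fft.rfftfreq(signal.size, d=1 / FPS)
power = np.abs(np.fft.rfft(signal)) ** 2

# Keep only physiologically plausible heart rates (0.7 to 4 Hz, 42 to 240 bpm).
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(power[band])]
print(f"estimated heart rate: {hr_bpm:.0f} bpm")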

Erin T. Gannon; Michael A. Grubb

How filmmakers guide the eye: The effect of average shot length on intersubject attentional synchrony Journal Article

In: Psychology of Aesthetics, Creativity, and the Arts, pp. 1–10, 2020.

@article{Gannon2020,
title = {How filmmakers guide the eye: The effect of average shot length on intersubject attentional synchrony},
author = {Erin T. Gannon and Michael A. Grubb},
doi = {10.1037/aca0000315},
year = {2020},
date = {2020-01-01},
journal = {Psychology of Aesthetics, Creativity, and the Arts},
pages = {1--10},
abstract = {As editing technology has advanced, filmmakers have become increasingly skilled at manipulating overt attention such that eye movements are highly synchronized during film viewing. Average shot length (ASL; film length/number of shots) is a quantitative metric in film studies that may help us understand this perceptual phenomenon. Since shorter shots give viewers less time to voluntarily scan images, we predicted that shorter ASLs would yield greater attentional synchrony across viewers. We recorded participants' eye movements as they viewed clips from commercially produced films with varying ASLs, and in line with our hypothesis, we found that ASL and attentional synchrony were negatively related. These findings were replicated in an independent sample of participants who viewed a different set of clips from the same films used in Experiment 1. Comparing across experiments, we found that within the same films, clips with shorter ASLs synchronized eye movements to a greater extent than did clips with longer ASLs. Studies of film perception have long implied that ASL modulates eye movements across viewers, and this study provides robust empirical evidence to support that claim.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1037/aca0000315
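
Both quantities related in this study are easy to make concrete: average shot length is film length divided by number of shots, and attentional synchrony can be proxied by how tightly viewers' gaze points cluster on each frame. The synchrony index below is one illustrative choice, not necessarily the authors' metric.

import numpy as np

def average_shot_length(duration_s: float, n_shots: int) -> float:
    """ASL = film length / number of shots."""
    return duration_s / n_shots

# gaze[v, f] = (x, y) of viewer v on frame f; placeholders for 12 viewers.
rng = np.random.default_rng(4)
gaze = rng.normal(loc=[640, 360], scale=80, size=(12, 300, 2))

centroid = gaze.mean(axis=0)                     # per-frame mean gaze point
dists = np.linalg.norm(gaze - centroid, axis=2)  # viewer-to-centroid distance
synchrony_index = dists.mean()                   # lower = tighter clustering

print(average_shot_length(120.0, 48), "s per shot")
print(f"mean gaze dispersion: {synchrony_index:.1f} px")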

Alexander Goettker; Kevin J. MacKenzie; T. Scott Murdison

Differences between oculomotor and perceptual artifacts for temporally limited head mounted displays Journal Article

In: Journal of the Society for Information Display, vol. 28, no. 6, pp. 509–519, 2020.

@article{Goettker2020b,
title = {Differences between oculomotor and perceptual artifacts for temporally limited head mounted displays},
author = {Alexander Goettker and Kevin J. MacKenzie and T. Scott Murdison},
doi = {10.1002/jsid.912},
year = {2020},
date = {2020-01-01},
journal = {Journal of the Society for Information Display},
volume = {28},
number = {6},
pages = {509--519},
abstract = {We used perceptual and oculomotor measures to understand the negative impacts of low (phantom array) and high (motion blur) duty cycles with a high-speed, AR-like head-mounted display prototype. We observed large intersubject variability for the detection of phantom array artifacts but a highly consistent and systematic effect on saccadic eye movement targeting during low duty cycle presentations. This adverse effect on saccade endpoints was also related to an increased error rate in a perceptual discrimination task, showing a direct effect of display duty cycle on the perceptual quality. For high duty cycles, the probability of detecting motion blur increased during head movements, and this effect was elevated at lower refresh rates. We did not find an impact of the temporal display characteristics on compensatory eye movements during head motion (e.g., VOR). Together, our results allow us to quantify the tradeoff of different negative spatiotemporal impacts of user movements and make subsequent recommendations for optimized temporal HMD parameters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1002/jsid.912

Andrea Grant; Gregory J. Metzger; Pierre François Van de Moortele; Gregor Adriany; Cheryl Olman; Lin Zhang; Joseph Koopermeiners; Yiğitcan Eryaman; Margaret Koeritzer; Meredith E. Adams; Thomas R. Henry; Kamil Uğurbil

10.5 T MRI static field effects on human cognitive, vestibular, and physiological function Journal Article

In: Magnetic Resonance Imaging, vol. 73, pp. 163–176, 2020.

@article{Grant2020,
title = {10.5 T MRI static field effects on human cognitive, vestibular, and physiological function},
author = {Andrea Grant and Gregory J. Metzger and Pierre François Van de Moortele and Gregor Adriany and Cheryl Olman and Lin Zhang and Joseph Koopermeiners and Yiğitcan Eryaman and Margaret Koeritzer and Meredith E. Adams and Thomas R. Henry and Kamil Uğurbil},
doi = {10.1016/j.mri.2020.08.004},
year = {2020},
date = {2020-01-01},
journal = {Magnetic Resonance Imaging},
volume = {73},
pages = {163--176},
publisher = {Elsevier},
abstract = {Purpose: To perform a pilot study to quantitatively assess cognitive, vestibular, and physiological function during and after exposure to a magnetic resonance imaging (MRI) system with a static field strength of 10.5 Tesla at multiple time scales. Methods: A total of 29 subjects were exposed to a 10.5 T MRI field and underwent vestibular, cognitive, and physiological testing before, during, and after exposure; for 26 subjects, testing and exposure were repeated within 2–4 weeks of the first visit. Subjects also reported sensory perceptions after each exposure. Comparisons were made between short and long term time points in the study with respect to the parameters measured in the study; short term comparison included pre-vs-isocenter and pre-vs-post (1–24 h), while long term compared pre-exposures 2–4 weeks apart. Results: Of the 79 comparisons, 73 parameters were unchanged or had small improvements after magnet exposure. The exceptions to this included lower scores on short term (i.e. same day) executive function testing, greater isocenter spontaneous eye movement during visit 1 (relative to pre-exposure), increased number of abnormalities on videonystagmography visit 2 versus visit 1 and a mix of small increases (short term visit 2) and decreases (short term visit 1) in blood pressure. In addition, more subjects reported metallic taste at 10.5 T in comparison to similar data obtained in previous studies at 7 T and 9.4 T. Conclusion: Initial results of 10.5 T static field exposure indicate that 1) cognitive performance is not compromised at isocenter, 2) subjects experience increased eye movement at isocenter, and 3) subjects experience small changes in vital signs but no field-induced increase in blood pressure. While small but significant differences were found in some comparisons, none were identified as compromising subject safety. A modified testing protocol informed by these results was devised with the goal of permitting increased enrollment while providing continued monitoring to evaluate field effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.mri.2020.08.004

Agnes Hardardottir; Mohammed Al-Hamdani; Raymond Klein; Austin Hurst; Sherry H. Stewart

The effect of cigarette packaging and illness sensitivity on attention to graphic health warnings: A controlled study Journal Article

In: Nicotine & Tobacco Research, vol. 22, no. 10, pp. 1788–1794, 2020.

@article{Hardardottir2020,
title = {The effect of cigarette packaging and illness sensitivity on attention to graphic health warnings: A controlled study},
author = {Agnes Hardardottir and Mohammed Al-Hamdani and Raymond Klein and Austin Hurst and Sherry H. Stewart},
doi = {10.1093/ntr/ntz243},
year = {2020},
date = {2020-01-01},
journal = {Nicotine & Tobacco Research},
volume = {22},
number = {10},
pages = {1788--1794},
abstract = {INTRODUCTION: The social and health care costs of smoking are immense. To reduce these costs, several tobacco control policies have been introduced (eg, graphic health warnings [GHWs] on cigarette packs). Previous research has found plain packaging (a homogenized form of packaging), in comparison to branded packaging, effectively increases attention to GHWs using UK packaging prototypes. Past studies have also found that illness sensitivity (IS) protects against health-impairing behaviors. Building on this evidence, the goal of the current study was to assess the effect of packaging type (plain vs. branded), IS level, and their interaction on attention to GHWs on cigarette packages using proposed Canadian prototypes. AIMS AND METHODS: We assessed the dwell time and fixations on the GHW component of 40 cigarette pack stimuli (20 branded; 20 plain). Stimuli were presented in random order to 50 smokers (60.8% male; mean age = 33.1; 92.2% daily smokers) using the EyeLink 1000 system. Participants were divided into low IS (n = 25) and high IS (n = 25) groups based on scores on the Illness Sensitivity Index. RESULTS: Overall, plain packaging relative to branded packaging increased fixations (but not dwell time) on GHWs. Moreover, low IS (but not high IS) smokers showed more fixations to GHWs on plain versus branded packages. CONCLUSIONS: These findings demonstrate that plain packaging is a promising intervention for daily smokers, particularly those low in IS, and contribute evidence in support of impending implementation of plain packaging in Canada. IMPLICATIONS: Our findings have three important implications. First, our study provides controlled experimental evidence that plain packaging is a promising intervention for daily smokers. Second, the findings of this study contribute supportive evidence for the impending plain packaging policy in Canada, and can therefore aid in defense against anticipated challenges from the tobacco industry upon its implementation. Third, given its effects in increasing attention to GHWs, plain packaging is an intervention likely to provide smokers enhanced incentive for smoking cessation, particularly among those low in IS who may otherwise be less interested in seeking treatment for tobacco dependence.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1093/ntr/ntz243

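The attentional outcomes in this study, fixation count and dwell time on the graphic health warning, are standard area-of-interest (AOI) statistics. A minimal Python sketch of how such AOI measures can be computed from a fixation list (illustrative only; the data layout and values are hypothetical, not the authors' analysis code):

# Count fixations and total dwell time inside a rectangular AOI.
# Hypothetical data layout: each fixation is (x_px, y_px, duration_ms).
def aoi_stats(fixations, aoi):
    """aoi = (left, top, right, bottom) in screen pixels."""
    left, top, right, bottom = aoi
    inside = [f for f in fixations
              if left <= f[0] <= right and top <= f[1] <= bottom]
    n_fixations = len(inside)
    dwell_ms = sum(f[2] for f in inside)
    return n_fixations, dwell_ms

# Example: a GHW occupying the upper half of a 400 x 500 px pack image.
fixations = [(120, 80, 230), (300, 420, 180), (150, 60, 310)]
print(aoi_stats(fixations, (0, 0, 400, 250)))  # -> (2, 540)
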
Claudia R. Hebert; Li Z. Sha; Roger W. Remington; Yuhong V. Jiang

Redundancy gain in visual search of simulated X-ray images Journal Article

In: Attention, Perception, and Psychophysics, vol. 82, no. 4, pp. 1669–1681, 2020.

@article{Hebert2020,
title = {Redundancy gain in visual search of simulated X-ray images},
author = {Claudia R. Hebert and Li Z. Sha and Roger W. Remington and Yuhong V. Jiang},
doi = {10.3758/s13414-019-01934-x},
year = {2020},
date = {2020-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {82},
number = {4},
pages = {1669--1681},
abstract = {Cancer diagnosis frequently relies on the interpretation of medical images such as chest X-rays and mammography. This process is error prone; misdiagnoses can reach a rate of 15% or higher. Of particular interest are false negatives—tumors that are present but missed. Previous research has identified several perceptual and attentional problems underlying inaccurate perception of these images. But how might these problems be reduced? The psychological literature has shown that presenting multiple, duplicate images can improve performance. Here we explored whether redundant image presentation can improve target detection in simulated X-ray images, by presenting four identical or similar images concurrently. Displays with redundant images, including duplicates of the same image, showed reduced false-negative rates, compared with displays with a single image. This effect held both when the target's prevalence rate was high and when it was low. Eye tracking showed that fixating on two or more images in the redundant condition speeded target detection and prolonged search, and that the latter effect was the key to reducing false negatives. The redundancy gain may result from both perceptual enhancement and an increase in the search quitting threshold.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13414-019-01934-x

Jay Hegdé

Deep learning can be used to train naïve, nonprofessional observers to detect diagnostic visual patterns of certain cancers in mammograms: A proof-of-principle study Journal Article

In: Journal of Medical Imaging, vol. 7, no. 2, pp. 1–22, 2020.

@article{Hegde2020,
title = {Deep learning can be used to train naïve, nonprofessional observers to detect diagnostic visual patterns of certain cancers in mammograms: A proof-of-principle study},
author = {Jay Hegdé},
doi = {10.1117/1.jmi.7.2.022410},
year = {2020},
date = {2020-01-01},
journal = {Journal of Medical Imaging},
volume = {7},
number = {2},
pages = {1--22},
abstract = {The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects' response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1117/1.jmi.7.2.022410

David R. Howell; Anna N. Brilliant; Christina L. Master; William P. Meehan

Reliability of objective eye-tracking measures among healthy adolescent athletes Journal Article

In: Clinical Journal of Sport Medicine, vol. 30, no. 5, pp. 444–450, 2020.

@article{Howell2020,
title = {Reliability of objective eye-tracking measures among healthy adolescent athletes},
author = {David R. Howell and Anna N. Brilliant and Christina L. Master and William P. Meehan},
doi = {10.1097/JSM.0000000000000630},
year = {2020},
date = {2020-01-01},
journal = {Clinical Journal of Sport Medicine},
volume = {30},
number = {5},
pages = {444--450},
abstract = {OBJECTIVE: To determine the test-retest correlation of an objective eye-tracking device among uninjured youth athletes. DESIGN: Repeated-measures study. SETTING: Sports-medicine clinic. PARTICIPANTS: Healthy youth athletes (mean age = 14.6 ± 2.2 years; 39% women) completed a brief, automated, and objective eye-tracking assessment. INDEPENDENT VARIABLES: Participants completed the eye-tracking assessment at 2 different testing sessions. MAIN OUTCOME MEASURES: During the assessment, participants watched a 220-second video clip while it moved around a computer monitor in a clockwise direction as an eye tracker recorded eye movements. We obtained 13 eye movement outcome variables and assessed correlations between the assessments made at the 2 time points using Spearman's Rho (rs). RESULTS: Thirty-one participants completed the eye-tracking evaluation at 2 time points [median = 7 (interquartile range = 6-9) days between tests]. No significant differences in outcomes were found between the 2 testing times. Several eye movement variables demonstrated moderate to moderately high test-retest reliability. Combined eye conjugacy metric (BOX score},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1097/JSM.0000000000000630

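Test–retest reliability in this study is indexed with Spearman's rho (rs) between the two sessions. A minimal SciPy sketch of that computation (the scores below are invented for illustration; they are not the study's data):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical values of one eye-movement metric at the two sessions.
session1 = np.array([0.82, 0.75, 0.91, 0.68, 0.79, 0.88])
session2 = np.array([0.80, 0.77, 0.89, 0.66, 0.83, 0.85])

rho, p_value = spearmanr(session1, session2)
print(f"test-retest r_s = {rho:.2f} (p = {p_value:.3f})")
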
Sabrina Karl; Magdalena Boch; Zsófia Virányi; Claus Lamm; Ludwig Huber

Training pet dogs for eye-tracking and awake fMRI Journal Article

In: Behavior Research Methods, vol. 52, no. 2, pp. 838–856, 2020.

@article{Karl2020a,
title = {Training pet dogs for eye-tracking and awake fMRI},
author = {Sabrina Karl and Magdalena Boch and Zsófia Virányi and Claus Lamm and Ludwig Huber},
doi = {10.3758/s13428-019-01281-7},
year = {2020},
date = {2020-01-01},
journal = {Behavior Research Methods},
volume = {52},
number = {2},
pages = {838--856},
abstract = {In recent years, two well-developed methods of studying mental processes in humans have been successively applied to dogs. First, eye-tracking has been used to study visual cognition without distraction in unrestrained dogs. Second, noninvasive functional magnetic resonance imaging (fMRI) has been used for assessing the brain functions of dogs in vivo. Both methods, however, require dogs to sit, stand, or lie motionless while yet remaining attentive for several minutes, during which time their brain activity and eye movements are measured. Whereas eye-tracking in dogs is performed in a quiet and, apart from the experimental stimuli, nonstimulating and highly controlled environment, MRI scanning can only be performed in a very noisy and spatially restraining MRI scanner, in which dogs need to feel relaxed and stay motionless in order to study their brain and cognition with high precision. Here we describe in detail a training regime that is perfectly suited to train dogs in the required skills, with a high success probability and while keeping to the highest ethical standards of animal welfare—that is, without using aversive training methods or any other compromises to the dog's well-being for both methods. By reporting data from 41 dogs that successfully participated in eye-tracking training and 24 dogs in fMRI training, we provide robust qualitative and quantitative evidence for the quality and efficiency of our training methods. By documenting and validating our training approach here, we aim to inspire others to use our methods to apply eye-tracking or fMRI for their investigations of canine behavior and cognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13428-019-01281-7

Sabrina Karl; Magdalena Boch; Anna Zamansky; Dirk Linden; Isabella C. Wagner; Christoph J. Völter; Claus Lamm; Ludwig Huber

Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures Journal Article

In: Scientific Reports, vol. 10, pp. 22273, 2020.

@article{Karl2020,
title = {Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures},
author = {Sabrina Karl and Magdalena Boch and Anna Zamansky and Dirk Linden and Isabella C. Wagner and Christoph J. Völter and Claus Lamm and Ludwig Huber},
doi = {10.1038/s41598-020-79247-5},
year = {2020},
date = {2020-01-01},
journal = {Scientific Reports},
volume = {10},
pages = {22273},
publisher = {Nature Publishing Group UK},
abstract = {Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, and the familiar person elicited relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1038/s41598-020-79247-5

Josiah P. J. King; Jia E. Loy; Hannah Rohde; Martin Corley

Interpreting nonverbal cues to deception in real time Journal Article

In: PLoS ONE, vol. 15, no. 3, pp. e0229486, 2020.

@article{King2020,
title = {Interpreting nonverbal cues to deception in real time},
author = {Josiah P. J. King and Jia E. Loy and Hannah Rohde and Martin Corley},
doi = {10.1371/journal.pone.0229486},
year = {2020},
date = {2020-01-01},
journal = {PLoS ONE},
volume = {15},
number = {3},
pages = {e0229486},
abstract = {When questioning the veracity of an utterance, we perceive certain non-linguistic behaviours to indicate that a speaker is being deceptive. Recent work has highlighted that listeners' associations between speech disfluency and dishonesty are detectable at the earliest stages of reference comprehension, suggesting that the manner of spoken delivery influences pragmatic judgements concurrently with the processing of lexical information. Here, we investigate the integration of a speaker's gestures into judgements of deception, and ask if and when associations between nonverbal cues and deception emerge. Participants saw and heard a video of a potentially dishonest speaker describe treasure hidden behind an object, while also viewing images of both the named object and a distractor object. Their task was to click on the object behind which they believed the treasure to actually be hidden. Eye and mouse movements were recorded. Experiment 1 investigated listeners' associations between visual cues and deception, using a variety of static and dynamic cues. Experiment 2 focused on adaptor gestures. We show that a speaker's nonverbal behaviour can have a rapid and direct influence on listeners' pragmatic judgements, supporting the idea that communication is fundamentally multimodal.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1371/journal.pone.0229486

Miguel A. Lago; Craig K. Abbey; Miguel P. Eckstein

Foveated model observers for visual search in 3D medical images Journal Article

In: IEEE Transactions on Medical Imaging, 2020.

@article{Lago2020,
title = {Foveated model observers for visual search in 3D medical images},
author = {Miguel A. Lago and Craig K. Abbey and Miguel P. Eckstein},
doi = {10.1109/TMI.2020.3044530},
year = {2020},
date = {2020-01-01},
journal = {IEEE Transactions on Medical Imaging},
abstract = {Model observers have a long history of success in predicting human observer performance in clinically-relevant detection tasks. New 3D image modalities provide more signal information but vastly increase the search space to be scrutinized. Here, we compared standard linear model observers (ideal observers, non-pre-whitening matched filter with eye filter, and various versions of Channelized Hotelling models) to human performance searching in 3D 1/f^2.8 filtered noise images and assessed its relationship to the more traditional location known exactly detection tasks and 2D search. We investigated two different signal types that vary in their detectability away from the point of fixation (visual periphery). We show that the influence of 3D search on human performance interacts with the signal's detectability in the visual periphery. Detection performance for signals difficult to detect in the visual periphery deteriorates greatly in 3D search but not in 3D location known exactly and 2D search. Standard model observers do not predict the interaction between 3D search and signal type. A proposed extension of the Channelized Hotelling model (foveated search model) that processes the image with reduced spatial detail away from the point of fixation, explores the image through eye movements, and scrolls across slices can successfully predict the interaction observed in humans and also the types of errors in 3D search. Together, the findings highlight the need for foveated model observers for image quality evaluation with 3D search.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1109/TMI.2020.3044530

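The 1/f^2.8 filtered noise backgrounds used in studies like this are typically synthesized by shaping the Fourier spectrum of white noise. A minimal 2D sketch of the general technique, assuming the 2.8 exponent applies to the power spectrum (so the amplitude filter is f^(-1.4)); this is not the authors' stimulus code:

import numpy as np

def power_law_noise(size=256, beta=2.8, seed=0):
    """White noise filtered so its power spectrum falls off as 1/f**beta."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    fx = np.fft.fftfreq(size)
    f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)  # radial frequency
    f[0, 0] = f[0, 1]                 # avoid dividing by zero at DC
    amp = f ** (-beta / 2.0)          # power ~ 1/f**beta => amplitude ~ f**(-beta/2)
    img = np.real(np.fft.ifft2(np.fft.fft2(white) * amp))
    return (img - img.mean()) / img.std()  # zero mean, unit variance

noise = power_law_noise()
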
Anthony J. Lambert; Tanvi Sharma; Nathan Ryckman

Accident vulnerability and vision for action: A pilot investigation Journal Article

In: Vision, vol. 4, pp. 1–13, 2020.

@article{Lambert2020a,
title = {Accident vulnerability and vision for action: A pilot investigation},
author = {Anthony J. Lambert and Tanvi Sharma and Nathan Ryckman},
doi = {10.3390/vision4020026},
year = {2020},
date = {2020-01-01},
journal = {Vision},
volume = {4},
pages = {1--13},
abstract = {Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than vision for perception. In this preliminary investigation, older and younger adults, with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire), completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible, in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/vision4020026

Fan Li; Chun Hsien Chen; Gangyan Xu; Li Pheng Khoo

Hierarchical eye-tracking data analytics for human fatigue detection at a traffic control center Journal Article

In: IEEE Transactions on Human-Machine Systems, vol. 50, no. 5, pp. 465–474, 2020.

@article{Li2020b,
title = {Hierarchical eye-tracking data analytics for human fatigue detection at a traffic control center},
author = {Fan Li and Chun Hsien Chen and Gangyan Xu and Li Pheng Khoo},
doi = {10.1109/THMS.2020.3016088},
year = {2020},
date = {2020-01-01},
journal = {IEEE Transactions on Human-Machine Systems},
volume = {50},
number = {5},
pages = {465--474},
abstract = {Eye-tracking-based human fatigue detection at traffic control centers suffers from an unavoidable problem of low-quality eye-tracking data caused by noisy and missing gaze points. In this article, the authors conducted pioneering work by investigating the effects of data quality on eye-tracking-based fatigue indicators and by proposing a hierarchical-based interpolation approach to extract the eye-tracking-based fatigue indicators from low-quality eye-tracking data. This approach adaptively classified the missing gaze points and hierarchically interpolated them based on the temporal-spatial characteristics of the gaze points. In addition, the definitions of applicable fixations and saccades for human fatigue detection are proposed. Two experiments are conducted to verify the effectiveness and efficiency of the method in extracting eye-tracking-based fatigue indicators and detecting human fatigue. The results indicate that most eye-tracking parameters are significantly affected by the quality of the eye-tracking data. In addition, the proposed approach can achieve much better performance than the classic velocity threshold identification algorithm (I-VT) and a state-of-the-art method (U'n'Eye) in parsing low-quality eye-tracking data. Specifically, the proposed method attained relatively stable eye-tracking-based fatigue indicators and reported the highest accuracy in human fatigue detection. These results are expected to facilitate the application of eye movement-based human fatigue detection in practice.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1109/THMS.2020.3016088

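The velocity-threshold identification (I-VT) algorithm that the authors use as a comparison baseline labels each gaze sample as fixation or saccade by thresholding its point-to-point angular velocity. A minimal sketch of that classic algorithm (the threshold and sampling rate are illustrative defaults, not the paper's settings):

import numpy as np

def ivt_classify(x, y, sample_rate_hz=500.0, threshold_deg_s=30.0):
    """Label each sample 'fix' or 'sac' by angular velocity (classic I-VT).

    x, y: gaze position in degrees of visual angle, one sample per frame.
    """
    dt = 1.0 / sample_rate_hz
    vel = np.hypot(np.diff(x), np.diff(y)) / dt  # deg/s between samples
    vel = np.append(vel, vel[-1])                # pad to original length
    return np.where(vel < threshold_deg_s, "fix", "sac")

# Example: a stationary stretch followed by a fast gaze shift.
x = np.array([0.0, 0.01, 0.02, 2.5, 5.0, 5.01])
y = np.zeros_like(x)
print(ivt_classify(x, y))  # -> ['fix' 'fix' 'sac' 'sac' 'fix' 'fix']
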
Zhenji Lu; Riender Happee; Joost C. F. Winter

Take over! A video-clip study measuring attention, situation awareness, and decision-making in the face of an impending hazard Journal Article

In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 72, pp. 211–225, 2020.

@article{Lu2020,
title = {Take over! A video-clip study measuring attention, situation awareness, and decision-making in the face of an impending hazard},
author = {Zhenji Lu and Riender Happee and Joost C. F. Winter},
doi = {10.1016/j.trf.2020.05.013},
year = {2020},
date = {2020-01-01},
journal = {Transportation Research Part F: Traffic Psychology and Behaviour},
volume = {72},
pages = {211--225},
abstract = {In highly automated driving, drivers occasionally need to take over control of the car due to limitations of the automated driving system. Research has shown that visually distracted drivers need about 7 s to regain situation awareness (SA). However, it is unknown whether the presence of a hazard affects SA. In the present experiment, 32 participants watched animated video clips from a driver's perspective while their eyes were recorded using eye-tracking equipment. The videos had lengths between 1 and 20 s and contained either no hazard or an impending crash in the form of a stationary car in the ego lane. After each video, participants had to (1) decide (no need to take over, evade left, evade right, brake only), (2) rate the danger of the situation, (3) rebuild the situation from a top-down perspective, and (4) rate the difficulty of the rebuilding task. The results showed that the hazard situations were experienced as more dangerous than the non-hazard situations, as inferred from self-reported danger and pupil diameter. However, there were no major differences in SA: hazard and non-hazard situations yielded equivalent speed and distance errors in the rebuilding task and equivalent self-reported difficulty scores. An exception occurred for the shortest time budget (1 s) videos, where participants showed impaired SA in the hazard condition, presumably because the threat inhibited participants from looking into the rear-view mirror. Correlations between measures of SA and decision-making accuracy were low to moderate. It is concluded that hazards do not substantially affect the global awareness of the traffic situation, except for short time budgets.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1016/j.trf.2020.05.013

Xueer Ma; Xiangling Zhuang; Guojie Ma

Transparent windows on food packaging do not always capture attention and increase purchase intention Journal Article

In: Frontiers in Psychology, vol. 11, pp. 593690, 2020.

@article{Ma2020a,
title = {Transparent windows on food packaging do not always capture attention and increase purchase intention},
author = {Xueer Ma and Xiangling Zhuang and Guojie Ma},
doi = {10.3389/fpsyg.2020.593690},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Psychology},
volume = {11},
pages = {593690},
abstract = {Transparent windows on food packaging can effectively highlight the actual food inside. The present study examined whether food packaging with transparent windows (relative to packaging with food‐ and non-food graphic windows in the same position and of the same size) has more advantages in capturing consumer attention and determining consumers' willingness to purchase. In this study, college students were asked to evaluate prepackaged foods presented on a computer screen, and their eye movements were recorded. The results showed salience effects for both packaging with transparent and food-graphic windows, which were also regulated by food category. Both transparent and graphic packaging gained more viewing time than the non-food graphic baseline condition for all three selected products (i.e., nuts, preserved fruits, and instant cereals). However, no significant difference was found between the transparent and graphic window conditions. For preserved fruits, time to first fixation was shorter for transparent packaging than for the other conditions. For nuts, the willingness to purchase was higher in both the transparent and graphic conditions than in the baseline condition, while packaging attractiveness played a key role in mediating consumers' willingness to purchase. The implications for stakeholders and future research directions are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3389/fpsyg.2020.593690

Nadine Matton; Pierre Vincent Paubel; Sébastien Puma

Toward the use of pupillary responses for pilot selection Journal Article

In: Human Factors, pp. 1–13, 2020.

@article{Matton2020,
title = {Toward the use of pupillary responses for pilot selection},
author = {Nadine Matton and Pierre Vincent Paubel and Sébastien Puma},
doi = {10.1177/0018720820945163},
year = {2020},
date = {2020-01-01},
journal = {Human Factors},
pages = {1--13},
abstract = {Objective: For selection practitioners, it seems important to assess the level of mental resources invested in order to perform a demanding task. In this study, we investigated the potential of pupil size measurement to discriminate the most proficient pilot students from the less proficient. Background: Cognitive workload is known to influence learning outcome. More specifically, cognitive difficulties observed during pilot training are often related to a lack of efficient mental workload management. Method: Twenty pilot students performed a laboratory multitasking scenario, composed of several stages with increasing workload, while their pupil size was recorded. Two levels of pilot students were compared according to the outcome after 2 years of training: high success and medium success. Results: Our findings suggested that task-evoked pupil size measurements could be a promising predictor of flight training difficulties during the 2-year training. Indeed, high-level pilot students showed greater pupil size changes from low-load to high-load stages of the multitasking scenario than medium-level pilot students. Moreover, average pupil diameters at the low-load stage were smallest for the high-level pilot students. Conclusion: Following the neural efficiency hypothesis framework, the most proficient pilot students supposedly used their mental resources more efficiently than the least proficient while performing the multitasking scenario. Application: These findings might introduce a new way of managing selection processes complemented with ocular measurements. More specifically, pupil size measurement could enable identification of applicants with greater chances of success during pilot training.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.1177/0018720820945163

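The workload index used here is a task-evoked change in pupil diameter between low-load and high-load stages of the multitasking scenario. A minimal sketch of that comparison (stage labels and sample values are hypothetical, and the real analysis would average within subjects and stages):

import numpy as np

def pupil_load_change(pupil_mm, stage_labels):
    """Mean pupil-diameter change from low-load to high-load stages."""
    pupil = np.asarray(pupil_mm)
    labels = np.asarray(stage_labels)
    return pupil[labels == "high"].mean() - pupil[labels == "low"].mean()

samples = [3.1, 3.2, 3.0, 3.9, 4.1, 4.0]               # diameters in mm
stages = ["low", "low", "low", "high", "high", "high"]
print(pupil_load_change(samples, stages))               # ~0.9 mm dilation
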
Anna Miscenà; Jozsef Arato; Raphael Rosenberg

Absorbing the gaze, scattering looks: Klimt's distinctive style and its two-fold effect on the eye of the beholder Journal Article

In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–13, 2020.

@article{Miscena2020,
title = {Absorbing the gaze, scattering looks: Klimt's distinctive style and its two-fold effect on the eye of the beholder},
author = {Anna Miscenà and Jozsef Arato and Raphael Rosenberg},
doi = {10.16910/jemr.13.2.8},
year = {2020},
date = {2020-01-01},
journal = {Journal of Eye Movement Research},
volume = {13},
number = {2},
pages = {1--13},
abstract = {Among the most renowned painters of the early twentieth century, Gustav Klimt is often associated – by experts and laymen alike – with a distinctive style of representation: the visual juxtaposition of realistic features and flattened ornamental patterns. Art historical writing suggests that this juxtaposition allows a two-fold experience: the perception of both the realm of art and the realm of life. While Klimt adopted a variety of stylistic choices in his career, this one popularised his work and was hardly ever used by other artists. The following study was designed to observe whether Klimt's distinctive style causes a specific behaviour in the viewer, at the level of eye movements. Twenty-one portraits were shown to thirty viewers while their eye movements were recorded. The pictures included artworks by Klimt in both his distinctive and non-distinctive styles, as well as by other artists of the same historical period. The recorded data show that only Klimt's distinctive paintings induce a specific eye-movement pattern with alternating longer (“absorbed”) and shorter (“scattered”) fixations. We therefore claim that there is a behavioural correspondence to what art historical interpretations have so far asserted: the perception of “Klimt's style” can be described as two-fold also at a physiological level.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.16910/jemr.13.2.8

Malik M. Naeem Mannan; M. Ahmad Kamran; Shinil Kang; Hak Soo Choi; Myung Yung Jeong

A hybrid speller design using eye tracking and SSVEP brain–computer interface Journal Article

In: Sensors, vol. 20, no. 3, pp. 1–20, 2020.

@article{NaeemMannan2020,
title = {A hybrid speller design using eye tracking and SSVEP brain–computer interface},
author = {Malik M. Naeem Mannan and M. Ahmad Kamran and Shinil Kang and Hak Soo Choi and Myung Yung Jeong},
doi = {10.3390/s20030891},
year = {2020},
date = {2020-01-01},
journal = {Sensors},
volume = {20},
number = {3},
pages = {1--20},
abstract = {Steady‐state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to the advantages of robustness, large numbers of commands, high classification accuracies, and high information transfer rates (ITRs). However, the use of several simultaneously flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose a stimuli‐responsive hybrid speller that combines electroencephalography (EEG) and video‐based eye‐tracking to increase user comfort when large numbers of simultaneously flickering stimuli are presented. A canonical correlation analysis (CCA)‐based framework identified the target frequency from a 1 s duration of flickering signal. Our proposed BCI‐speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI‐spellers use as many frequencies as targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued‐spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free‐spelling task. Consequently, our proposed speller is superior to other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye-tracking and SSVEP BCI‐based system will ultimately enable a truly high-speed communication channel.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3390/s20030891

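ITRs like those reported here are conventionally computed with Wolpaw's formula from the number of targets N, the classification accuracy P, and the time per selection T. A minimal sketch of that standard computation (the 1.5 s selection time below is an assumption for illustration, not the paper's exact timing):

import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Information transfer rate in bits per minute (Wolpaw's formula)."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# 48 targets at 90.35% accuracy, assuming roughly 1.5 s per selection.
print(f"{wolpaw_itr(48, 0.9035, 1.5):.1f} bits/min")  # ~183.6
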
Diederick C. Niehorster; Thiago Santini; Roy S. Hessels; Ignace T. C. Hooge; Enkelejda Kasneci; Marcus Nyström

The impact of slippage on the data quality of head-worn eye trackers Journal Article

In: Behavior Research Methods, vol. 52, no. 3, pp. 1140–1160, 2020.

@article{Niehorster2020,
title = {The impact of slippage on the data quality of head-worn eye trackers},
author = {Diederick C. Niehorster and Thiago Santini and Roy S. Hessels and Ignace T. C. Hooge and Enkelejda Kasneci and Marcus Nyström},
doi = {10.3758/s13428-019-01307-0},
year = {2020},
date = {2020-01-01},
journal = {Behavior Research Methods},
volume = {52},
number = {3},
pages = {1140--1160},
abstract = {Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of its characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

  • doi:10.3758/s13428-019-01307-0

Close
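The slippage analysis above reports accuracy as an increase in gaze deviation over a quiet baseline recording. A minimal sketch of that kind of comparison, assuming gaze samples and the fixation target are already expressed in degrees of visual angle; the data are synthetic and the small-angle combination of horizontal and vertical offsets is an illustrative simplification, not the authors' pipeline:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic gaze samples (degrees of visual angle) around a target at (0, 0):
# a quiet baseline recording and a recording after the glasses were moved.
base_gx, base_gy = rng.normal(0.0, 0.3, 500), rng.normal(0.0, 0.3, 500)
task_gx, task_gy = rng.normal(1.0, 0.5, 500), rng.normal(0.5, 0.5, 500)

def gaze_deviation_deg(gx, gy, tx=0.0, ty=0.0):
    # Mean angular offset (deg) between gaze samples and the target,
    # combining horizontal and vertical angles (small-angle approximation).
    return float(np.hypot(np.asarray(gx) - tx, np.asarray(gy) - ty).mean())

increase = gaze_deviation_deg(task_gx, task_gy) - gaze_deviation_deg(base_gx, base_gy)
print(f"Deviation increase over baseline: {increase:.2f} deg")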

Paul Henri Prévot; Kevin Gehere; Fabrice Arcizet; Himanshu Akolkar; Mina A. Khoei; Kévin Blaize; Omar Oubari; Pierre Daye; Marion Lanoë; Manon Valet; Sami Dalouz; Paul Langlois; Elric Esposito; Valérie Forster; Elisabeth Dubus; Nicolas Wattiez; Elena Brazhnikova; Céline Nouvel-Jaillard; Yannick LeMer; Joanna Demilly; Claire Maëlle Fovet; Philippe Hantraye; Morgane Weissenburger; Henri Lorach; Elodie Bouillet; Martin Deterre; Ralf Hornig; Guillaume Buc; José Alain Sahel; Guillaume Chenegros; Pierre Pouget; Ryad Benosman; Serge Picaud

Behavioural responses to a photovoltaic subretinal prosthesis implanted in non-human primates Journal Article

In: Nature Biomedical Engineering, vol. 4, no. 2, pp. 172–180, 2020.

Abstract | Links | BibTeX

@article{Prevot2020,
title = {Behavioural responses to a photovoltaic subretinal prosthesis implanted in non-human primates},
author = {Paul Henri Prévot and Kevin Gehere and Fabrice Arcizet and Himanshu Akolkar and Mina A. Khoei and Kévin Blaize and Omar Oubari and Pierre Daye and Marion Lanoë and Manon Valet and Sami Dalouz and Paul Langlois and Elric Esposito and Valérie Forster and Elisabeth Dubus and Nicolas Wattiez and Elena Brazhnikova and Céline Nouvel-Jaillard and Yannick LeMer and Joanna Demilly and Claire Maëlle Fovet and Philippe Hantraye and Morgane Weissenburger and Henri Lorach and Elodie Bouillet and Martin Deterre and Ralf Hornig and Guillaume Buc and José Alain Sahel and Guillaume Chenegros and Pierre Pouget and Ryad Benosman and Serge Picaud},
doi = {10.1038/s41551-019-0484-2},
year = {2020},
date = {2020-01-01},
journal = {Nature Biomedical Engineering},
volume = {4},
number = {2},
pages = {172--180},
abstract = {Retinal dystrophies and age-related macular degeneration related to photoreceptor degeneration can cause blindness. In blind patients, although the electrical activation of the residual retinal circuit can provide useful artificial visual perception, the resolutions of current retinal prostheses have been limited either by large electrodes or small numbers of pixels. Here we report the evaluation, in three awake non-human primates, of a previously reported near-infrared-light-sensitive photovoltaic subretinal prosthesis. We show that multipixel stimulation of the prosthesis within radiation safety limits enabled eye tracking in the animals, that they responded to stimulations directed at the implant with repeated saccades and that the implant-induced responses were present two years after device implantation. Our findings pave the way for the clinical evaluation of the prosthesis in patients affected by dry atrophic age-related macular degeneration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

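The behavioural read-out above rests on counting saccadic responses to stimulation directed at the implant. A generic velocity-threshold (I-VT-style) detector illustrates how such responses can be flagged; the 30 deg/s threshold, sampling rate, and synthetic trace are assumptions for illustration, not details from the study:

import numpy as np

def detect_saccades(x, y, fs, vel_thresh=30.0):
    # Flag samples whose angular speed (deg/s) exceeds vel_thresh --
    # a bare-bones I-VT detector; the threshold is an assumed value.
    vx = np.gradient(np.asarray(x)) * fs
    vy = np.gradient(np.asarray(y)) * fs
    return np.hypot(vx, vy) > vel_thresh

# Synthetic 1-s horizontal trace at 500 Hz with one 8-deg saccade at 0.5 s.
fs = 500
t = np.arange(fs) / fs
x = np.where(t < 0.5, 0.0, 8.0) + np.random.default_rng(1).normal(0, 0.05, fs)
y = np.zeros(fs)
flags = detect_saccades(x, y, fs)
print(f"{int(flags.sum())} samples above the saccade velocity threshold")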

David Randall; Sophie Lauren Fox; John Wesley Fenner; Gemma Elizabeth Arblaster; Anne Bjerre; Helen Jane Griffiths

Using VR to investigate the relationship between visual acuity and severity of simulated oscillopsia Journal Article

In: Current Eye Research, vol. 45, no. 12, pp. 1611–1618, 2020.

Abstract | Links | BibTeX

@article{Randall2020,
title = {Using VR to investigate the relationship between visual acuity and severity of simulated oscillopsia},
author = {David Randall and Sophie Lauren Fox and John Wesley Fenner and Gemma Elizabeth Arblaster and Anne Bjerre and Helen Jane Griffiths},
doi = {10.1080/02713683.2020.1772834},
year = {2020},
date = {2020-01-01},
journal = {Current Eye Research},
volume = {45},
number = {12},
pages = {1611--1618},
publisher = {Taylor & Francis},
abstract = {Purpose: Oscillopsia is a debilitating symptom resulting from involuntary eye movement most commonly associated with acquired nystagmus. Investigating and documenting the effects of oscillopsia severity on visual acuity (VA) is challenging. This paper aims to further understanding of the effects of oscillopsia using a virtual reality simulation. Methods: Fifteen right-beat horizontal nystagmus waveforms, with different amplitude (1°, 3°, 5°, 8° and 11°) and frequency (1.25 Hz, 2.5 Hz and 5 Hz) combinations, were produced and imported into virtual reality to simulate different severities of oscillopsia. Fifty participants without ocular pathology were recruited to read logMAR charts in virtual reality under stationary conditions (no oscillopsia) and subsequently while experiencing simulated oscillopsia. The change in VA (logMAR) was calculated for each oscillopsia simulation (logMAR VA with oscillopsia minus logMAR VA with no oscillopsia), removing the influence of different baseline VAs between participants. A one-tailed paired t-test was used to assess statistical significance in the worsening of VA caused by the oscillopsia simulations. Results: VA worsened with each incremental increase in simulated oscillopsia intensity (frequency × amplitude), either by increasing frequency or amplitude, with the exception of statistically insignificant changes at lower intensity simulations. Theoretical understanding predicted a linear relationship between increasing oscillopsia intensity and worsening VA. This was supported by observations at lower intensity simulations but not at higher intensities, with incremental changes in VA gradually levelling off. A potential reason for the difference at higher intensities is the influence of frame rate when using digital simulations in virtual reality. Conclusions: The frequency and amplitude were found to equally affect VA, as predicted. These results not only consolidate the assumption that VA degrades with oscillopsia but also provide quantitative information that relates these changes to amplitude and frequency of oscillopsia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

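The abstract defines both derived quantities explicitly: oscillopsia intensity as frequency × amplitude, and the outcome as logMAR acuity with oscillopsia minus logMAR acuity without. A short sketch enumerating the fifteen conditions under a purely linear prediction; the slope k is hypothetical, since the paper reports the linear trend levelling off at high intensities:

# Both derived quantities from the design; k is a hypothetical slope
# used only to show the linear prediction the authors tested.
freqs_hz = [1.25, 2.5, 5.0]
amps_deg = [1.0, 3.0, 5.0, 8.0, 11.0]
k = 0.01  # hypothetical logMAR worsening per unit intensity

for f in freqs_hz:
    for a in amps_deg:
        intensity = f * a                    # frequency x amplitude
        predicted_delta_va = k * intensity   # linear prediction (logMAR)
        print(f"{f:4.2f} Hz x {a:4.1f} deg -> intensity {intensity:5.2f}, "
              f"predicted change in VA {predicted_delta_va:.3f} logMAR")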

Deirdre A. Robertson; Peter D. Lunn

The effect of spatial location of calorie information on choice, consumption and eye movements Journal Article

In: Appetite, vol. 144, pp. 1–10, 2020.

Abstract | Links | BibTeX

@article{Robertson2020,
title = {The effect of spatial location of calorie information on choice, consumption and eye movements},
author = {Deirdre A. Robertson and Peter D. Lunn},
doi = {10.1016/j.appet.2019.104446},
year = {2020},
date = {2020-01-01},
journal = {Appetite},
volume = {144},
pages = {1--10},
abstract = {We manipulated the presence and spatial location of calorie labels on menus while tracking eye movements. A novel “lab-in-the-field” experimental design allowed eye movements to be recorded while participants chose lunch from a menu, unaware that their choice was part of a study. Participants exposed to calorie information ordered 93 fewer calories (11%) relative to a control group who saw no calorie labels. The difference in number of calories consumed was greater still. The impact was strongest when calorie information was displayed just to the right of the price, in an equivalent font. The effects were mediated by knowledge of the amount of calories in the meal, implying that calorie posting led to more informed decision-making. There was no impact on enjoyment of the meal. The eye-tracking data suggested that the spatial arrangement altered individuals' search strategies while viewing the menu. This research suggests that the spatial location of calories on menus may be an important consideration when designing calorie posting legislation and policy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}


Qëndresa Rramani; Ian Krajbich; Laura Enax; Lisa Brustkern; Bernd Weber

Salient nutrition labels shift peoples' attention to healthy foods and exert more influence on their choices Journal Article

In: Nutrition Research, vol. 80, pp. 106–116, 2020.

Abstract | Links | BibTeX

@article{Rramani2020,
title = {Salient nutrition labels shift peoples' attention to healthy foods and exert more influence on their choices},
author = {Qëndresa Rramani and Ian Krajbich and Laura Enax and Lisa Brustkern and Bernd Weber},
doi = {10.1016/j.nutres.2020.06.013},
year = {2020},
date = {2020-01-01},
journal = {Nutrition Research},
volume = {80},
pages = {106--116},
publisher = {Elsevier Inc.},
abstract = {Nutrition labels are the most commonly used tools to promote healthy choices. Research has shown that color-coded traffic light (TL) labels are more effective than purely numerical Guideline Daily Amount (GDA) labels at promoting healthy eating. While these effects of TL labels on food choice are hypothesized to rely on attention, how this occurs remains unknown. Based on previous eye-tracking research we hypothesized that TL labels compared to GDA labels will attract more attention, will induce shifts in attention allocation to healthy food items, and will increase the influence of attention to the labels on food choice. To test our hypotheses, we conducted an eye-tracking experiment where participants chose between healthy and unhealthy food items accompanied either by TL or GDA labels. We found that TL labels biased choices towards healthier items because their presence caused participants to allocate more attention to healthy items and less to unhealthy items. Moreover, our data indicated that TL labels were more likely to be looked at, and had a larger effect on choice, despite attracting less dwell time. These results reveal that TL labels increase healthy food choice, relative to GDA labels, by shifting attention and the effects of attention on choice.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

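Two of the attention measures above, whether a label was looked at and how long gaze dwelt on it, are standard area-of-interest (AOI) statistics. A minimal sketch, assuming fixations have already been parsed from the raw gaze stream; the rectangular AOI and fixation values are invented for illustration:

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float       # screen position (px)
    y: float
    dur_ms: float  # fixation duration

def aoi_stats(fixations, aoi):
    # Dwell time (summed fixation durations) and a looked-at flag for one
    # rectangular AOI given as (x0, y0, x1, y1) in screen pixels.
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return sum(f.dur_ms for f in hits), bool(hits)

# Hypothetical label AOI and fixations -- values invented for illustration.
label_aoi = (100, 600, 300, 700)
fixs = [Fixation(150, 650, 180), Fixation(400, 200, 250), Fixation(120, 620, 90)]
dwell, looked = aoi_stats(fixs, label_aoi)
print(f"label looked at: {looked}, dwell time: {dwell} ms")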

Donghyun Ryu; Andrew Cooke; Eduardo Bellomo; Tim Woodman

Watch out for the hazard! Blurring peripheral vision facilitates hazard perception in driving Journal Article

In: Accident Analysis and Prevention, vol. 146, pp. 1–13, 2020.

Abstract | Links | BibTeX

@article{Ryu2020,
title = {Watch out for the hazard! Blurring peripheral vision facilitates hazard perception in driving},
author = {Donghyun Ryu and Andrew Cooke and Eduardo Bellomo and Tim Woodman},
doi = {10.1016/j.aap.2020.105755},
year = {2020},
date = {2020-01-01},
journal = {Accident Analysis and Prevention},
volume = {146},
pages = {1--13},
abstract = {The objectives of this paper were to directly examine the roles of central and peripheral vision in hazard perception and to test whether perceptual training can enhance hazard perception. We also examined putative cortical mechanisms underpinning any effect of perceptual training on performance. To address these objectives, we used the gaze-contingent display paradigm to selectively present information to central and peripheral parts of the visual field. In Experiment 1, we compared hazard perception abilities of experienced and inexperienced drivers while watching video clips in three different viewing conditions (full vision; clear central and blurred peripheral vision; blurred central and clear peripheral vision). Participants' visual search behaviour and cortical activity were simultaneously recorded. In Experiment 2, we determined whether training with clear central and blurred peripheral vision could improve hazard perception among non-licensed drivers. Results demonstrated that (i) information from central vision is more important than information from peripheral vision in identifying hazard situations, for screen-based hazard perception tests, (ii) clear central and blurred peripheral vision viewing helps the alignment of line-of-gaze and attention, (iii) training with clear central and blurred peripheral vision can improve screen-based hazard perception. The findings have important implications for road safety and provide a new training paradigm to improve hazard perception.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

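The gaze-contingent display paradigm described above presents clear imagery to one part of the visual field and blurred imagery to the rest, keyed to the current gaze position. A minimal per-frame compositing sketch, assuming a pre-blurred copy of each frame is available and using a hard circular mask; real implementations typically feather the boundary and run inside the display loop:

import numpy as np

def gaze_contingent_frame(sharp, blurred, gaze_xy, radius_px):
    # Keep a circular region around gaze sharp and show the blurred copy
    # elsewhere (clear-central condition); swapping the two inputs yields
    # the blurred-central condition.
    h, w = sharp.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1]) <= radius_px
    mask = mask[..., None].astype(sharp.dtype)
    return mask * sharp + (1 - mask) * blurred

# Hypothetical frames; a real loop would refresh gaze_xy from the tracker.
sharp = np.random.default_rng(2).random((480, 640, 3))
blurred = sharp * 0.5  # stand-in for a Gaussian-blurred copy
frame = gaze_contingent_frame(sharp, blurred, gaze_xy=(320, 240), radius_px=100)
print(frame.shape)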

Steven W. Savage; Douglas D. Potter; Benjamin W. Tatler

The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task Journal Article

In: Accident Analysis and Prevention, vol. 138, pp. 1–11, 2020.

Abstract | Links | BibTeX

@article{Savage2020,
title = {The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task},
author = {Steven W. Savage and Douglas D. Potter and Benjamin W. Tatler},
doi = {10.1016/j.aap.2020.105469},
year = {2020},
date = {2020-01-01},
journal = {Accident Analysis and Prevention},
volume = {138},
pages = {1--11},
publisher = {Elsevier},
abstract = {Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye fixation related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}

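The EEG marker discussed above is theta-band output at occipital electrodes. A minimal sketch of one common way to quantify it, Welch's power spectral density averaged over the 4–7 Hz band, applied to a synthetic channel; the band edges and parameters are illustrative conventions, not necessarily those used in the paper:

import numpy as np
from scipy.signal import welch

def theta_power(eeg, fs, band=(4.0, 7.0)):
    # Mean power spectral density in the theta band for one channel,
    # via Welch's method; band edges here are an assumed convention.
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[sel].mean())

# Synthetic 10-s "occipital" channel at 256 Hz with a 6 Hz component.
fs = 256
t = np.arange(10 * fs) / fs
eeg = np.sin(2 * np.pi * 6 * t) + np.random.default_rng(3).normal(0, 1, t.size)
print(f"theta-band PSD: {theta_power(eeg, fs):.3f}")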
