CASE STUDY: Relationship Between Language Experience and the Time Course of Spoken Word Recognition

Understanding how humans recognize spoken words is a complex and fascinating area of psycholinguistic research. In their recent paper “The relationship between language experience variables and the time course of spoken word recognition,” McDonald and Zamuner (2025) used eye tracking to examine how an individual’s lifetime language exposure influences the process of spoken word recognition.
The core research question concerns the interplay between language experience and the efficiency of spoken word recognition. Previous research has often focused on language proficiency or age of acquisition as key predictors of second language (L2) processing. The authors suggest, however, that lifetime exposure to a language may be a more sensitive and comprehensive predictor, since it captures a broader range of variability in language experience for both native and early second language learners. The study aimed to move beyond the traditional dichotomy of first versus second language learners, instead placing participants on a continuum of French language experience based on the percentage of their lifetime language input received in French.
Eye Tracking and the Visual World Paradigm
The central methodology employed in this study was the visual world paradigm (VWP), in which participants view a visual display containing several images while hearing an auditory stimulus. By tracking gaze during the task, researchers can observe cognitive processes unfolding in real time during language comprehension.
McDonald and Zamuner (2025) leveraged the VWP to precisely measure the time course of spoken word recognition. When a listener hears a word like “dog,” not only “dog” but also phonologically similar words (e.g., “dot”) and semantically related words (e.g., “bear”) become transiently active in the mental lexicon. By tracking eye movements, the researchers could observe when participants began to fixate on the target image, as well as on phonological and semantic competitor images. For instance, increased fixations on an image of a “dot” when hearing “dog” would indicate phonological coactivation; similarly, fixations on a “bear” when hearing “dog” would signal semantic coactivation.
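The basic VWP measure behind this logic is the proportion of gaze samples landing on each image over time. As a minimal sketch (the data and region labels below are hypothetical, not taken from the study), fixation proportions per time bin might be computed like this:

```python
from collections import Counter

# Hypothetical gaze samples: (time_ms, region) pairs, where region is the
# image currently fixated ("target", "phon", "sem", or "other"). Real VWP
# data would come from an eye tracker sampling at e.g. 1000 Hz.
samples = [
    (0, "other"), (50, "other"), (100, "phon"), (150, "phon"),
    (200, "target"), (250, "target"), (300, "target"), (350, "target"),
]

def fixation_proportions(samples, bin_ms=100):
    """Return, for each time bin, the proportion of samples on each region."""
    bins = {}
    for t, region in samples:
        bins.setdefault(t // bin_ms * bin_ms, []).append(region)
    return {
        start: {r: n / len(regions) for r, n in Counter(regions).items()}
        for start, regions in sorted(bins.items())
    }

for start, dist in fixation_proportions(samples).items():
    print(start, dist)
```

Curves like these, aggregated across trials and participants, are what reveal when the target begins to dominate over its phonological and semantic competitors.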
The high sampling rate and exceptional accuracy of the SR Research EyeLink 1000 allowed McDonald and Zamuner to analyze not just whether activation occurred, but when it occurred and how strongly. They could measure the speed and strength of target word activation by looking at the proportion of fixations and the timing of peak fixations. For competitor words, eye-tracking provided insights into the peak of coactivation and, crucially, the resolution of competition—how well competing words were suppressed after the target was identified. The study’s use of generalized additive mixed models (GAMMs) further enhanced the analysis of the continuous eye-tracking data, allowing for the modeling of nonlinear interactions between continuous variables like time and lifetime French exposure.
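The key idea behind a GAMM here is that fixation proportion is modeled as a smooth, nonlinear function of time rather than a straight line. A real analysis would use a dedicated GAMM package (e.g., mgcv in R); the toy kernel smoother below, with made-up values, only illustrates the notion of estimating such a smooth curve:

```python
import math

# Hypothetical time points (ms) and target-fixation proportions.
times = [0, 100, 200, 300, 400, 500, 600]
target_prop = [0.10, 0.15, 0.40, 0.75, 0.90, 0.92, 0.91]

def kernel_smooth(x, xs, ys, bandwidth=100.0):
    """Weighted average of ys, with weights decaying with distance from x."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Estimate a smooth fixation curve over the trial window.
smoothed = [round(kernel_smooth(t, times, target_prop), 3) for t in times]
print(smoothed)
```

Unlike this stand-in, GAMMs also handle random effects for participants and items and can model how the shape of the time curve itself changes with a continuous predictor such as lifetime French exposure.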
Eye Tracking Shows Time Course of Word Recognition is Shaped by Second Language Experience
The findings demonstrated the efficacy of eye tracking in revealing subtle processing differences. For example, participants with more lifetime French exposure looked to the target image faster and with a higher peak proportion of fixations. Although vocabulary size emerged more often as a significant predictor in exploratory analyses, the eye-tracking data provided granular detail about the time course of lexical activation and competition, offering a dynamic view of how language experience shapes spoken word recognition.
The research highlights how eye-tracking serves as an invaluable method for capturing the intricate and time-sensitive processes involved in language comprehension.
For information regarding how eye tracking can help your research, check out our solutions and product pages or contact us. We are happy to help!