03-31-2022, 07:08 AM
Experiment Builder allows for audio recording during trials using a RECORD_SOUND node. However, there are hardware, software, and data-analysis considerations involved in implementing precise audio recording and in analyzing the gaze data relative to the audio signal. The guide below outlines the basics of this type of design.
Hardware & Software Setup
Your setup requirements depend on your operating system.
- Windows Requirements
On a Windows PC, you must use a separate, ASIO-compatible sound card supported by Experiment Builder. You will also need a microphone and potentially a preamplifier.
- A list of supported sound cards and installation instructions can be found in the User Manual: Help > Contents > Installation > Windows PC System Requirements > ASIO Card Installation.
- macOS Requirements
On Mac computers, no special hardware is needed. Experiment Builder can use the default macOS audio drivers for recording.
- Experiment Builder Preferences
After installing any necessary hardware, you must tell Experiment Builder to use the correct driver.
- In Experiment Builder, go to Edit > Preferences.
- Select the AUDIO tab.
- Set the Audio Driver option to ASIO (for Windows) or your preferred default driver (for macOS).
Basic Audio Recording
The RECORD_SOUND node controls the audio recording for each trial. Two of its properties are critical:
- File Name: To give each audio file a unique name with each trial, link the File Name property of the node to a column in your data source that contains the expected name for that trial.
- Duration: The default recording duration is 60 seconds (60000 ms). If your trials may last longer, increase the Duration property accordingly, adding a few extra seconds as a buffer.
Synchronizing Audio with Eye-Tracking Data
Accurately marking the exact start time of an audio recording in your eye-tracking data file requires a special technique.
The Timing Challenge
Normally, Experiment Builder can send a message to the EyeLink data file to mark when an action occurs. However, there is a small, variable delay between when the RECORD_SOUND node executes and when the audio hardware actually begins recording. A standard message would mark the command time, not the true start time, leading to synchronization errors.
To solve this, we can calculate the precise delay and send it as an "offset" along with our message. When you open the data in Data Viewer, it reads this offset and automatically shifts the message backward in time, placing it at the exact moment the audio recording began.
The message sent to the data file must follow this format, where [Offset] is the calculated delay in milliseconds:
Code:
MSG EDF_time [Offset] Message_text
For example, Data Viewer will place the following message at time 14108770 (which is 14108943 - 173):
Code:
MSG 14108943 173 RECORDING_STARTED
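The placement rule is simple subtraction, which can be checked with the numbers from the example above:

```python
# Data Viewer subtracts the offset from the EDF timestamp to recover
# the true recording start time (values taken from the example above).
edf_time = 14108943   # time the message reached the EyeLink data file
offset = 173          # measured delay in milliseconds
effective_time = edf_time - offset
print(effective_time)  # 14108770
```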
Implementation: Creating the Sync Message
You create and send this message using a SEND_EL_MESSAGE node.
Add a SEND_EL_MESSAGE node to your sequence. This node must be placed before the node that stops the audio recording.
In the "Value" property of this node, enter the following expression. This calculates the offset and formats the message correctly.
Code:
=str(int(round(@parent.parent.parent.DISPLAY.currentTime@-@RECORD_SOUND.recordStartTime@))) + " RECORDING_STARTED"
Breaking down the equation above:
- @...DISPLAY.currentTime@: This gets the current time on the Display PC (when the message is being sent).
- @RECORD_SOUND.recordStartTime@: This gets the exact time the audio recording began.
- The difference between these two values is the offset delay.
- str(int(round(...))): This function rounds the delay to the nearest millisecond and converts it into a string.
- + " RECORDING_STARTED": This appends your desired message text to the calculated offset.
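As a plain-Python sketch of what the node expression computes (the two timestamp values below are made-up stand-ins for Experiment Builder's runtime properties):

```python
# Hypothetical timestamps standing in for the Experiment Builder
# runtime properties referenced in the node expression.
display_current_time = 14108943.4   # @...DISPLAY.currentTime@
record_start_time = 14108770.6      # @RECORD_SOUND.recordStartTime@

# Offset = how long ago the recording actually started.
offset = display_current_time - record_start_time

# Round to the nearest millisecond and prepend it to the message text.
message_value = str(int(round(offset))) + " RECORDING_STARTED"
print(message_value)  # 173 RECORDING_STARTED
```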
Data Analysis
At the analysis stage, the RECORDING_STARTED message you created is the key to perfectly aligning your audio and eye-tracking data. In Data Viewer, you can use this message in two different ways.
- Use an Interest Period (IP) (Relative Timestamps)
This method is useful if you don't want to change the trial's original timeline but still want to see event times relative to the audio start. It's ideal for generating reports.
- Create a new Interest Period that starts with the RECORDING_STARTED message.
- Apply the new Interest Period to your trial.
- In the Inspector window on the left, select the Preferences tab.
- Under the "Output / Analysis" section, find the Time Scale field and select IP Relative from its dropdown menu.
- Use a Reaction Time (RT) Definition (Aligns Time Zero)
This method resets the start of the trial (time zero) to the exact moment the audio recording began. This makes all event times in your trial directly aligned with the audio file's timeline.
- In Data Viewer, go to Analysis > Reaction Time Manager....
- Click the "New RT Definition" icon to open the editor.
- In the Start Time Message Text field, enter RECORDING_STARTED.
- Save and apply the new RT definition.
For more information, please take a look at the "Synchronizing eye movements and audio recorded by Experiment Builder" video tutorial (see the YouTube and direct download links below).
If your experiment recorded speech, you can add precise SPEECH_ONSET messages to your Data Viewer file. This allows you to analyze eye movements relative to the moment a participant began speaking. Here are two methods to do this.
- Manually Adding Messages (for a Few Trials)
This method is straightforward for marking a small number of trials one by one.
- Find the Onset Time
First, open a recorded .wav file in an audio editor like Audacity or Praat. Pinpoint the exact time in milliseconds (ms) when speech begins. For example, the speech onset might be at 718 ms.
- Prepare Your Data Viewer Trial
- Set an Interest Period: In Data Viewer, create an Interest Period (IP) that starts with your RECORDING_STARTED message. This makes the start of the IP align perfectly with the start of the audio file.
- Group Your Trials: To easily match trials with their audio files, group them by the audio file name variable. Go to Edit > Trial Grouping, select the relevant variable (e.g., audio_file_name), and regroup.
- Add the Speech Onset Message
- Select the trial you want to mark.
- In the middle panel of the Inspector window, right-click and choose Add New Message.
- Enter your message text (e.g., SPEECH_ONSET).
- From the time dropdown menu, select IP Time. This ensures the timestamp will be relative to the start of your Interest Period.
- Enter the speech onset time you measured in Step 1 (e.g., 718).
- Click "Enter" and repeat this process for all other relevant trials.
- Batch Importing Messages (for Many Trials)
Manually adding messages can be time-consuming for large datasets. A much more efficient workflow is to create a formatted text file containing all your speech onset times and import it as a Message List.
This powerful feature allows you to add messages to all your trials at once. For detailed instructions on how to format and import a message list, please consult the Data Viewer manual:
- Help > Contents > Working with Events, Samples and Interest Areas > Messages > Importing/Import Message List
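If your measured onset times already live in a spreadsheet or dictionary, generating the import file itself can be scripted. The sketch below writes a simple tab-separated text file; the exact columns and ordering Data Viewer expects are defined in the manual section above, so treat this layout (trial identifier, time in ms, message text) as a placeholder assumption, and the data and file name as hypothetical.

```python
# Hypothetical onset data: trial identifier -> speech onset in ms,
# as measured in Audacity or Praat.
onsets = {
    "trial_001": 718,
    "trial_002": 642,
}

# Write one tab-separated line per trial. Check the Data Viewer manual
# (Import Message List) for the actual required column layout.
with open("speech_onsets.txt", "w") as f:
    for trial_id, onset_ms in onsets.items():
        f.write(f"{trial_id}\t{onset_ms}\tSPEECH_ONSET\n")
```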
Setting up a voice key trigger to identify the start time of voice responses
This trigger fires as soon as the microphone signal crosses a pre-defined sound threshold. When it fires, it automatically writes a VOICE_KEY message into the EyeLink data file, marking the event time. To use it, simply place a VOICE_KEY trigger node in your trial sequence at the point where you want to wait for the participant's vocal response.
Important: Limitations and Accuracy
While convenient, a VOICE_KEY trigger is not always a precise measure of speech onset. Its reliability is affected by several factors:
- Hardware Dependencies: The trigger's sensitivity depends heavily on your microphone and any preamplifier hardware.
- Phonetic Sensitivity: The trigger is less reliable for quiet, voiceless initial sounds (like /s/ or /f/) compared to louder, voiced sounds (like /a/ or /b/). A voiceless sound might not cross the threshold immediately, causing a delayed or missed trigger.
For better accuracy, we strongly recommend recording the audio and identifying the precise voice onset time post-hoc using audio analysis software like Audacity or Praat.
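Inspecting the waveform by hand in Audacity or Praat is the most reliable route; for a rough first pass over many files, a simple amplitude threshold can flag candidate onsets automatically. The sketch below assumes 16-bit PCM .wav files and uses a made-up threshold value; it is a screening aid, not a substitute for checking the waveform.

```python
import struct
import wave

def estimate_onset_ms(path, threshold=0.1):
    """Return the time (ms) of the first sample whose amplitude exceeds
    `threshold` (as a fraction of full scale), or None if none does."""
    with wave.open(path, "rb") as w:
        if w.getsampwidth() != 2:
            raise ValueError("sketch assumes 16-bit PCM audio")
        n_channels = w.getnchannels()
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
        # Unpack interleaved little-endian signed 16-bit samples.
        samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
        limit = threshold * 32768
        for i, s in enumerate(samples):
            if abs(s) > limit:
                # Convert sample index to frame index, then to ms.
                return 1000.0 * (i // n_channels) / rate
    return None
```

Any onset found this way inherits the voice key's phonetic bias (quiet voiceless sounds cross the threshold late), so verify the flagged times visually before adding them as SPEECH_ONSET messages.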