FAQ: How do I synchronise eye movements and audio recorded by Experiment Builder?
Experiment Builder allows audio recording during trials using a RECORD_SOUND node. On a Windows computer, recording audio requires a separate ASIO-compatible sound card supported by Experiment Builder, a microphone, and in some cases a preamplifier. A list of the supported sound cards, along with installation instructions, can be found in the User Manual: open Experiment Builder and click Help (from the top menu bar) -> Contents -> Installation -> Windows PC System Requirements -> ASIO Card Installation. Once you have followed all installation steps, the ASIO card can be used by Experiment Builder for audio recording. In your Experiment Builder project, click Edit -> Preferences -> AUDIO and select "ASIO" for the "Audio Driver" option.

On Mac computers, Experiment Builder can use the default macOS audio drivers and no additional audio card/ASIO hardware is required. 

When recording audio for each trial, the "File Name" property of the RECORD_SOUND node can be linked to the experiment's data source so that a unique file name is assigned to the .wav file generated in each trial. Note that the "Duration" property of the RECORD_SOUND node is set to 60 seconds (60000 ms) by default. If you know in advance that your audio recordings may last longer than 60 seconds, increase the "Duration" value accordingly, allowing a few extra seconds just in case. After data collection, the recorded wav files can be found in the "recorded_audio" folder of the session's results folder. The audio files can later be opened in third-party audio-editing software such as Audacity or Praat for analysis.

Typically in Experiment Builder, if you fill in the "Message" property of an action node in the RECORDING sequence, a marker is written to the EyeLink data file with the precise time that action was carried out, so eye movements can be synchronised with the action. When recording audio, however, the precise time at which the audio recording started is not known until some time after the RECORD_SOUND node is executed. Once the audio recording start time is known, a message can be placed at the correct time for Data Viewer by prefixing the message text with an offset value. The attached example script (syncAudio.ebz) shows how a "RECORDING_STARTED" message marking the start of the audio recording can be written to the data file with an offset value via a SEND_EL_MESSAGE node. The offset is a number (in milliseconds) placed before the message text and is calculated as the current Display PC time (the time the message is created) minus the "recordStartTime" property of the RECORD_SOUND node. When the eye movement data file is opened in Data Viewer, this offset is subtracted from the timestamp of the message so that the message is placed at the correct time (i.e., the EDF time at which the audio recording actually started).

The message written to the data file should therefore have the format Data Viewer expects:

MSG EDF_time  [Offset]  Message_text

[Offset] is an integer value (in milliseconds) that is subtracted from the EDF time to derive the real message time. A positive offset value places the message backwards in time.

For example, when Data Viewer parses the following message, it will be placed back in time by 173ms at EDF time 14108770:

MSG 14108943 173 RECORDING_STARTED
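The offset arithmetic can be illustrated with a short Python sketch. The function name and parsing logic below are illustrative only — this is not Data Viewer code, just a minimal model of the correction it applies:

```python
def real_message_time(msg_line):
    """Parse a 'MSG <EDF_time> <offset> <text>' line and return the
    corrected message time, mirroring the subtraction Data Viewer performs."""
    parts = msg_line.split()
    edf_time = int(parts[1])   # EDF timestamp when the message was written
    offset = int(parts[2])     # milliseconds to subtract
    text = " ".join(parts[3:])
    return edf_time - offset, text

# The example message above is placed back in time by 173 ms:
time_ms, text = real_message_time("MSG 14108943 173 RECORDING_STARTED")
print(time_ms, text)  # 14108770 RECORDING_STARTED
```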

To replicate the equation used by the SEND_AUDIO_START_MESSAGE_WITH_OFFSET node (found inside the RECORDING sequence of the syncAudio.ebz project), do the following: in the "Edit Attribute" window, start with an equals sign, followed by Devices > DISPLAY > Current Time, then a minus sign, then RECORD_SOUND > Record Start Time, then a plus sign followed by open quotes, a space, RECORDING_STARTED, and close quotes. Wrap the initial part of the equation, @parent.parent.parent.DISPLAY.currentTime@-@RECORD_SOUND.recordStartTime@, within str, int and round brackets so the full equation looks like: =str(int(round(@parent.parent.parent.DISPLAY.currentTime@-@RECORD_SOUND.recordStartTime@))) + " RECORDING_STARTED"
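In plain Python, the attribute equation above evaluates like this. The two variables stand in for the @...@ attribute references, and the numeric values are made up for illustration:

```python
# Hypothetical values standing in for the @...@ attribute references:
current_time = 14108943.4       # Display PC time when the message is created
record_start_time = 14108770.2  # RECORD_SOUND node's recordStartTime property

# Mirrors: =str(int(round(...))) + " RECORDING_STARTED"
message = str(int(round(current_time - record_start_time))) + " RECORDING_STARTED"
print(message)  # 173 RECORDING_STARTED
```

The round/int pair turns the fractional millisecond difference into the integer offset that Data Viewer expects in front of the message text.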

The SEND_AUDIO_START_MESSAGE_WITH_OFFSET node needs to be placed before the RECORD_SOUND_CONTROL node which stops audio recording. In other words, the audio recording start time needs to be identified before audio recording ends for that trial.

Data Analysis

At the analysis stage, the "RECORDING_STARTED" message in the data file allows you to create a Reaction Time definition in Data Viewer so that the eye movements (together with other messages sent by Experiment Builder) are aligned with the recorded wav file: click "Analysis > Reaction Time Manager..." (from the top menu bar), click the "New RT Definition" icon, and enter "RECORDING_STARTED" in the "Start Time Message Text" field of the "Reaction Time Definition Editor". Once you create and apply the reaction time definition, time zero of the eye movement data and messages will coincide with the start of the audio recording. This makes it easy to identify which eye movements or messages coincide with specific segments of the audio file, and vice versa.

Instead of creating a Reaction Time definition, if you are using a recent version of Data Viewer you can create an Interest Period that starts with the "RECORDING_STARTED" message (click the "Full Trial Period" drop-down menu at the top of Data Viewer, select "Edit", then click the "New Interest Period" icon). Once the Interest Period is applied, click the "Preferences" tab in the Inspector window on the left, click "Output / Analysis" in the top panel of the Inspector, and choose "IP Relative" from the drop-down menu beside the "Time Scale" field in the bottom panel. This lets you output reports with the times of messages and events relative to the time when audio recording started.

For more information, please take a look at the "Synchronizing eye movements and audio recorded by Experiment Builder" video tutorial (see the YouTube and direct download links below):

YouTube link

Direct download

If your experiment recorded speech and you want to identify/mark speech onset times in Data Viewer, you can follow these steps:

1) Open each recorded wav file using third-party audio software such as Audacity or Praat. The example image below shows an audio file with speech onset at 718 ms after the start of the audio recording.

[Image: waveform of a recorded audio file, with speech onset marked at 718 ms]

2) In Data Viewer, create an Interest Period starting from the "RECORDING_STARTED" message and ending with the DISPLAY_BLANK message: click the "Full Trial Period" drop-down menu at the top of Data Viewer, select "Edit", then click the "New Interest Period" icon. Apply the new Interest Period by selecting it from the drop-down menu at the top of Data Viewer.

3) In Data Viewer, group all trials by the variable that codes for the recorded audio file name, e.g., the "audio_file_name" variable in the syncAudio demo: click Edit > Trial Grouping, move the variable to the "Selected Variables" list, and click "Regroup". All trials will now be grouped by audio file name, e.g., a, b, c, etc., representing the recorded audio files a.wav, b.wav, c.wav. This way, in the Inspector window on the left you can clearly identify which audio file is linked to which trial of which participant.

4) Click on a trial in the top panel of the Inspector window to select it. Right-click in the middle panel of the Inspector and choose "Add New Message". In the new message window, enter a message, e.g., SPEECH_ONSET, choose "IP Time" from the drop-down menu (so the message time will be relative to the start of the Interest Period), and enter the voice onset time relative to the audio recording start time, e.g., 718. Press "Enter" to create the message. Repeat this step for all relevant trials.


5) Instead of steps 2-4 above, you can create a message list with all the speech onset times for each participant, following the instructions in the Data Viewer manual: in Data Viewer, click Help > Contents to open the manual and navigate to Working with Events, Samples and Interest Areas > Messages > Importing/Import Message List.
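As an alternative to inspecting every wav file by hand, a first-pass estimate of each onset can be computed programmatically. The sketch below is a minimal, hypothetical amplitude-threshold detector for 16-bit mono wav files (the function name and threshold value are illustrative assumptions); real speech onsets are best verified by eye in Audacity or Praat, as described in step 1:

```python
import struct
import wave

def estimate_onset_ms(wav_path, threshold=0.05):
    """Return the time (in ms) of the first sample whose absolute amplitude
    exceeds `threshold` (a fraction of full scale), or None if the threshold
    is never exceeded. Assumes a 16-bit mono wav file."""
    with wave.open(wav_path, "rb") as wav:
        rate = wav.getframerate()
        n_frames = wav.getnframes()
        samples = struct.unpack("<%dh" % n_frames, wav.readframes(n_frames))
    limit = threshold * 32768  # convert fraction of full scale to raw units
    for i, sample in enumerate(samples):
        if abs(sample) > limit:
            return 1000.0 * i / rate
    return None
```

A fixed threshold will over- or under-shoot for quiet or noisy recordings, which is exactly why the manual inspection in step 1 remains the reference method.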


Setting up a voice key trigger to identify the start time of voice responses

Experiment Builder can use a VOICE_KEY trigger node, which fires whenever a sound threshold (picked up by the microphone) is exceeded and which can be used to identify the start time of voice responses. You can set up your experiment as described above, then place a VOICE_KEY trigger node where you want to wait for the start of a vocal response (see the attached syncAudio_voicekey.ebz for an example). The trigger writes a VOICE_KEY message to the EyeLink data file whenever it fires. Voice-key sensitivity will depend on the microphone and any preamplifier hardware used. Note that the trigger's sensitivity is also influenced by the properties of the speech sounds: if the initial sound is voiced, the trigger tends to be more accurate, whereas voiceless initial sounds might not trigger it at all. A VOICE_KEY trigger therefore might not accurately reflect when the participant starts a vocal response. For better start-time accuracy, it is best to record the audio and identify the voice onset post hoc using third-party audio analysis software such as Audacity or Praat.
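The basic principle behind a voice-key trigger — and why voiceless onsets can be missed — can be sketched as a simple threshold check over the incoming samples. This is an illustrative approximation only, not Experiment Builder's implementation, and the function name and threshold are assumptions:

```python
def voice_key_fired(samples, threshold=0.1, full_scale=32768):
    """Return the index of the first sample whose absolute amplitude exceeds
    `threshold` (a fraction of full scale), or None if the voice key never
    fires. `samples` is an iterable of signed 16-bit values."""
    limit = threshold * full_scale
    for i, sample in enumerate(samples):
        if abs(sample) > limit:
            return i
    return None

# A voiced onset ramps up quickly and crosses the threshold early; a
# low-energy voiceless onset may never cross it at all:
print(voice_key_fired([0, 100, 5000, 20000]))  # fires at index 2
print(voice_key_fired([0, 100, 200, 150]))     # None: the trigger misses it
```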


Attached Files
.ebz   syncAudio.ebz (Size: 36.97 KB / Downloads: 44)
.ebz   syncAudio_voicekey.ebz (Size: 37.38 KB / Downloads: 27)