
Synchronizing Eye Movements and Audio Recorded by Experiment Builder

Required Experiment Builder Version: 2.2.299 or higher
EyeLink Required: Yes
Type: Complete Example
Difficulty Level: Easy

This example illustrates:
  • How to use the Record Sound and Record Sound Control nodes to start and stop recording audio
  • How to save audio recording times
  • How to use a voice key trigger
Description:
Experiment Builder allows audio to be recorded during trials using a RECORD_SOUND node. On a Windows computer, recording audio requires a separate ASIO-compatible sound card supported by Experiment Builder, as well as a microphone and, in some cases, a preamplifier. A list of the supported sound cards, along with installation instructions, can be found in the User Manual: open Experiment Builder, click Help (in the top menu bar) -> Contents -> Installation -> Windows PC System Requirements -> ASIO Card Installation. Once you have followed all of the installation steps, the ASIO card can be used by Experiment Builder for audio recording.

On Mac computers, Experiment Builder can use the default macOS audio drivers and no additional audio card/ASIO hardware is required. 

When recording audio for each trial, the "File Name" property of a RECORD_SOUND node can be linked to the experiment's data source so that a unique file name is assigned to the .wav file recorded in each trial. Also, the "Duration" property of the RECORD_SOUND node is set to 60 seconds (60000 ms) by default. If you know in advance that your audio recordings may last more than 60 seconds, increase the "Duration" value accordingly, allowing a few extra seconds as a buffer. After data collection, the recorded .wav files can be found in the "recorded_audio" folder of the session's results folder, as sketched below. The audio files can later be opened in third-party audio-editing software such as Audacity or Praat for analysis.
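
If it helps at the analysis stage, here is a minimal post-session sketch in Python (not part of Experiment Builder itself) that lists the .wav files saved by the RECORD_SOUND node. The "results/<session>/recorded_audio" layout follows the description above; the session folder name is a placeholder you would adjust for your own deployed project:

from pathlib import Path

# Hypothetical session folder inside the deployed project's results directory
audio_dir = Path("results") / "session_01" / "recorded_audio"

# List each recorded .wav file with its size
for wav_file in sorted(audio_dir.glob("*.wav")):
    size_kb = wav_file.stat().st_size / 1024
    print(f"{wav_file.name}: {size_kb:.1f} KB")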

Typically in Experiment Builder, if you fill in the "Message" property of an action node in the RECORDING sequence, a marker is written to the EyeLink data file with the precise time at which that action was carried out, so eye movements can be synchronised with the action. When recording audio, however, the precise time at which the audio recording started is not known until some time after the RECORD_SOUND node is executed. Once the audio recording start time is known, a message can be placed at the correct time for Data Viewer by prefixing the message text with an offset value. The attached example script (syncAudio.ebz) shows how a "RECORDING_STARTED" message marking the start of the audio recording can be written to the data file with an offset value via a SEND_EL_MESSAGE node. The offset is a number (in milliseconds) placed before the message text and is calculated as the current Display PC time (the time the message is created) minus the "recordStartTime" property of the RECORD_SOUND node. When the eye movement data file is opened in Data Viewer, this offset is subtracted from the timestamp of the message so that the message is placed at the correct time (i.e., at the EDF time when the audio recording actually started).

The message written to the data file should therefore have the following format for Data Viewer:

MSG EDF_time  [Offset]  Message_text

[Offset] is an integer value that is subtracted from the EDF time to generate the real message time; a positive offset therefore places the message backwards in time.

For example, when Data Viewer parses the following message, the message is placed back in time by 173 ms, at EDF time 14108770:

MSG 14108943 173 RECORDING_STARTED
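
As a sanity check, the arithmetic Data Viewer performs when parsing such a message can be sketched in a few lines of Python (the message line is the example above; the helper function name is ours, not part of any SR Research tool):

def true_message_time(msg_line):
    """Return the EDF time of an offset message after the offset is subtracted."""
    # Expected layout: MSG <EDF_time> <offset> <message_text>
    parts = msg_line.split()
    edf_time, offset = int(parts[1]), int(parts[2])
    return edf_time - offset

print(true_message_time("MSG 14108943 173 RECORDING_STARTED"))  # prints 14108770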

To replicate the equation used by the SEND_AUDIO_START_MESSAGE_WITH_OFFSET node (found inside the RECORDING sequence of the syncAudio.ebz project), do the following in the "Edit Attribute" window: start with an equals sign, followed by Devices > DISPLAY > Current Time, a minus sign, and RECORD_SOUND > Record Start Time; then add a plus sign followed by an opening quote, a space, RECORDING_STARTED, and a closing quote. Wrap the initial part of the equation, @parent.parent.parent.DISPLAY.currentTime@-@RECORD_SOUND.recordStartTime@, in str, int and round brackets so the full equation looks like: =str(int(round(@parent.parent.parent.DISPLAY.currentTime@-@RECORD_SOUND.recordStartTime@))) + " RECORDING_STARTED"
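
For clarity, here is a plain-Python restatement of what that attribute expression evaluates to; current_time and record_start_time stand in for @parent.parent.parent.DISPLAY.currentTime@ and @RECORD_SOUND.recordStartTime@, and the values are illustrative only:

# Illustrative values (milliseconds on the Display PC clock)
current_time = 523401.7       # time the message is created
record_start_time = 523228.4  # time the audio recording actually started

# Same str(int(round(...))) wrapping as the attribute equation above
offset = str(int(round(current_time - record_start_time)))
message = offset + " RECORDING_STARTED"
print(message)  # prints "173 RECORDING_STARTED"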

Data Analysis
At the analysis stage, the "RECORDING_STARTED" message in the data file allows you to create a Reaction Time definition in Data Viewer: enter "RECORDING_STARTED" in the "Start Time Message Text" field of the "Reaction Time Definition Editor". Once the reaction time definition is applied, time zero of the eye movement data and messages will coincide with the start of the audio recording, so eye movements (together with any other messages sent by Experiment Builder) are aligned with the recorded .wav file. This makes it easy to identify which eye movements or messages coincide with specific segments of the audio file, and vice versa.
Alternatively, if you are using a recent version of Data Viewer, you can create an Interest Period that starts with the "RECORDING_STARTED" message. Then click on the "Preferences" tab of the Inspector window on the left, click "Output / Analysis" in the top panel of the Inspector, and choose "IP Relative" from the drop-down menu beside the "Time Scale" field in the bottom panel. This lets you output reports in which the times of messages and events are relative to the time the audio recording started, as in the sketch below.
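
The same re-zeroing can also be done by hand on an exported message report, as in this Python sketch (the message list and the non-RECORDING_STARTED message names are hypothetical; substitute the columns of your own Data Viewer report):

# (EDF time, message text) pairs, e.g. read from a Data Viewer message report
messages = [
    (14108770, "RECORDING_STARTED"),
    (14109120, "TARGET_ONSET"),   # hypothetical trial message
    (14110045, "RESPONSE"),       # hypothetical trial message
]

start_time = next(t for t, text in messages if text == "RECORDING_STARTED")
for t, text in messages:
    # Times are now relative to audio onset, so they can be matched
    # directly against positions in the recorded .wav file
    print(f"{t - start_time:>6} ms  {text}")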

For more information, please take a look at the "Synchronizing eye movements and audio recorded by Experiment Builder" tutorial:

YouTube link

Direct download

Setting up a voice key trigger to identify the start time of voice responses
Experiment Builder provides a VOICE_KEY trigger node, which fires whenever the sound level picked up by the microphone exceeds a threshold, and which can be used to identify the start time of voice responses. You can set up your experiment as described above, then place a VOICE_KEY trigger node where you want to wait for the start of a vocal response (see the attached syncAudio_voicekey.ebz for an example). A VOICE_KEY message is written to the EyeLink data file whenever the VOICE_KEY node fires. Voice key sensitivity will depend on the microphone and any preamplifier hardware used. If the VOICE_KEY trigger does not appear to be sensitive enough, click on the VOICE_KEY node and put a checkmark beside the "Below Threshold" option to improve the triggering sensitivity.

Note that VOICE_KEY trigger sensitivity can also be influenced by the properties of the speech sounds themselves. For instance, the trigger tends to be more accurate for voiced initial sounds than for voiceless initial sounds, which might not trigger it at all. A VOICE_KEY trigger therefore might not be an accurate representation of when the participant starts a vocal response. For better start time accuracy, it is best to record the audio and identify the voice onset post hoc using third-party audio analysis software such as Audacity or Praat, as in the sketch below.
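
As one possible starting point for such post-hoc analysis, the Python sketch below estimates voice onset in a recorded .wav file by finding the first window whose RMS amplitude exceeds a threshold. It is a crude stand-in for careful inspection in Audacity or Praat: the file name is hypothetical, the threshold and window length are arbitrary starting points, and 16-bit PCM audio is assumed:

import math
import struct
import wave

def voice_onset_ms(path, threshold=500, window_ms=10):
    """Return the time (ms) of the first window whose RMS exceeds threshold."""
    with wave.open(path, "rb") as wav:
        frames_per_win = int(wav.getframerate() * window_ms / 1000)
        n_windows = wav.getnframes() // frames_per_win
        for i in range(n_windows):
            raw = wav.readframes(frames_per_win)
            # Assumes 16-bit little-endian PCM samples
            samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
            rms = math.sqrt(sum(s * s for s in samples) / len(samples))
            if rms > threshold:
                return i * window_ms
    return None  # no window exceeded the threshold

print(voice_onset_ms("recorded_audio/trial_01.wav"))  # hypothetical file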

Instructions:
  1. Download the syncAudio.ebz or syncAudio_voicekey.ebz example from this message.
  2. Launch the Experiment Builder application.
  3. Unpack the syncAudio.ebz or syncAudio_voicekey.ebz file to a location on your Experiment Builder PC with "File menu -> Unpack".
  4. Open the project in Experiment Builder.
  5. Deploy the project to a new folder.
  6. Run the syncAudio.exe or syncAudio_voicekey.exe from the deployed directory.

syncAudio.ebz
syncAudio_voicekey.ebz

Attached Files
Synchronizing_Eye_Movements_and_Audio_Recording.ebz (584.57 KB)