For the 3T we have the Cambridge Research Systems LiveTrack monocular system available.
Replace the standard head coil mirror with the white mount from the cabinet in the MR room. Tighten the two screws just enough that the mount is attached to the top half of the head coil but can still slide for fine adjustments. Be careful when disconnecting and reconnecting the top half of the coil: make sure the camera/mirror mount doesn't fall off. Connect the cable to the one coming from the front penetration panel; when plugging it in, make sure the two red dots on either side of the connector are aligned. It should slide in smoothly and click into place, so don't force it.

Make sure both monitors are connected to the 3t-stim computer and enabled. If one is not enabled (typically the left one), right-click on the desktop and open the NVIDIA Control Panel. Find the tab that shows all connected monitors and enable all three. Monitor 1 (the left display) should be set as primary; monitor 2 (right) and monitor 3 (BOLD screen) should be clones and secondary. This arrangement is essential for working with the system; other arrangements will not work. Start the LiveTrack AV software (the link is on the desktop) and put it on the secondary screen so that it is also visible on the cloned BOLD screen inside the MR room. You should see input from the camera.
Prepare the participant by removing mascara and replacing glasses with MRI glasses. Soft contact lenses can be worn during an eye-tracking experiment, but rigid contact lenses may interfere with the corneal reflection (CR) and therefore need to be replaced with MRI glasses.
When the participant is placed inside the scanner, make sure that their eyes and eyebrows are not covered by the head coil, as the head coil may cast a shadow over the eyes, which interferes with the experiment. When you slide the participant into the scanner, align on the eyebrows, so that every participant ends up in more or less the same position.
After the participant is placed in the bore and the camera is set up correctly, look at the BOLD screen to check whether the participant's eyes are in the centre of the camera FOV. If this is not the case, adjust the camera. Ask the participant whether he or she can see all corners of the screen. Under most circumstances, the LiveTrack AV camera and illuminator should already be more or less set up correctly and you should see the participant's eye in the camera's FOV. If the eye is not (properly) visible or illuminated, gently move the camera to get the eye into the middle of the camera's FOV. Carefully adjust the focus of the camera so that the CR becomes as small as possible and the eye and eyelashes are in focus. If you are unsure how to do this, please contact the staff of the Spinoza Centre.
Calibrating gaze-direction coordinates is easy and takes less than a minute with the LiveTrack Viewer software utility. The geometry of your setup is inferred by having the participant fixate a sequence of nine dots presented at known locations on the stimulus display. Enter the viewing distance between the participant's eyes and the stimulus display, then instruct the participant to fixate the targets as they appear one at a time. When the eye rotations to all nine locations have been measured, the utility calculates a calibration matrix that determines the relationship between eye rotation in camera coordinate space and screen position in degrees for all subsequent measurements. The matrix is uploaded to the LiveTrack AV signal processing unit and automatically applied to the tracking data the unit generates. After the initial calibration, a very simple drift correction can be performed between trials using a single one-dot calibration that takes just a few seconds. You are not required to use the LiveTrack Viewer for calibration; you are free to implement your own. The template scripts linked below use Psychtoolbox-3 and/or MATLAB and show how to create a custom calibration.
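As an illustration, the minimal sketch below shows how the target-presentation part of such a custom calibration might look in Psychtoolbox-3. It only draws nine targets at known screen positions in random order; collecting the LiveTrack samples and fitting the calibration matrix are left to the vendor demo scripts linked below, and the grid positions, dot size, and timing are placeholder values you will want to adapt.

% Minimal sketch of a custom nine-point calibration display in Psychtoolbox-3.
% Only the target presentation is shown; collecting the LiveTrack samples and
% fitting the calibration matrix is left to the vendor demo scripts below.
screenId = max(Screen('Screens'));
[win, rect] = Screen('OpenWindow', screenId, 0);
xs = rect(3) .* [0.1 0.5 0.9];                  % placeholder grid positions
ys = rect(4) .* [0.1 0.5 0.9];
[gx, gy] = meshgrid(xs, ys);
targets = [gx(:) gy(:)];
targets = targets(randperm(size(targets, 1)), :);   % randomise target order
dotSizePx = 10;                                 % placeholder dot size
for i = 1:size(targets, 1)
    dotRect = CenterRectOnPointd([0 0 dotSizePx dotSizePx], targets(i,1), targets(i,2));
    Screen('FillOval', win, [255 255 255], dotRect);
    Screen('Flip', win);
    % ... record LiveTrack samples here and store them with targets(i,:) ...
    WaitSecs(1.5);                              % placeholder fixation period
end
Screen('CloseAll');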
It is possible to capture the raw analog video feed (unprocessed by the LiveTrack AV hardware). A USB video capture device sends the analog (NTSC) video signal to the 3t-physio computer (in the back corner furthest from the scanner). The video can be recorded with honestech TVR 2.5 (shortcut on the desktop). As yet, there is no ready-made solution for recording the video feed together with the scanner triggers; MATLAB seems an obvious candidate for implementing this.
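If you want to try this, the rough MATLAB sketch below logs the capture-device video to disk while time-stamping triggers. It assumes the USB capture device is visible through the Image Acquisition Toolbox 'winvideo' adaptor, that Psychtoolbox is installed for the timing and keyboard calls, and that scanner triggers arrive as '5' keypresses; all of these assumptions need to be verified on the 3t-physio computer.

% Rough sketch: log the analog eye video to disk while time-stamping scanner
% triggers. Device index, adaptor name, and trigger key are assumptions.
runDurationSecs = 600;                       % adjust to your run length
vid = videoinput('winvideo', 1);             % USB capture device (assumed index)
vid.FramesPerTrigger = Inf;                  % keep acquiring until stop()
vid.LoggingMode = 'disk';
vid.DiskLogger = VideoWriter('eye_video.avi', 'Motion JPEG AVI');

triggerTimes = [];
start(vid);
t0 = GetSecs;                                % Psychtoolbox clock
while GetSecs - t0 < runDurationSecs
    [keyIsDown, secs, keyCode] = KbCheck;
    if keyIsDown && keyCode(KbName('5%'))    % scanner trigger as '5' keypress (assumed)
        triggerTimes(end+1) = secs - t0;     %#ok<AGROW>
        KbReleaseWait;                       % avoid logging the same press twice
    end
end
stop(vid);
wait(vid, 10, 'logging');                    % allow disk logging to finish
save('trigger_times.mat', 'triggerTimes');
delete(vid);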
Quit the LiveTrack AV software on the stimulus computer. Disconnect the LiveTrack AV camera, unscrew the mount from the headcoil and place it back in the cabinet. Put the lens cap back on the camera.
CRS LiveTrack Matlab/Psychtoolbox demo scripts
Matlab example script from Joris Coppens (NIN)
For the 7T we have the SR Research Eyelink 1000 Plus system available. It has been mounted onto a hot-mirror system, which greatly improves the fidelity and stability of the recordings.
Important:
The optical components are fragile and expensive, and touching them will degrade signal quality. When moving the eye-tracking system, grab only the black side bars that hold the hot mirror.
Connect the wires:
Find the top battery and check whether it is sufficiently charged (press the test button). If the battery is not fully charged, it will probably not last a full hour. Bring the battery into the MR system room, plug in the two power cables, and turn the battery on.
The EyeLink software will give an error message during start-up if the battery is not connected and turned on. After placing the battery, remove the lens cap from the EyeLink camera. Don't forget to turn on the BOLD screen.
Turn on the EyeLink monitor and select DVI as input (by default it is on Display Port, which is for the stimulus computer). Then turn on the EyeLink computer and choose EyeLink mode. You should now see the EyeLink graphical user interface (GUI).
Also turn on the stimulus computer and start the Track software from the desktop shortcut. Keep the standard EDF filename (SDEMO) and hit Enter twice so that the camera image is visible on the BOLD screen. Make sure that you are viewing the overview (zoomed-out) image by pressing the arrow keys on the stimulus computer.
Important: the EyeLink setup does not return to defaults after shutting down the EyeLink. Make sure that all settings are correct before starting your experiment. For some default settings, see the Eyelink Settings below.
Prepare the participant by removing mascara and replacing glasses with MRI glasses. Soft contact lenses can be worn during an eye-tracking experiment, but rigid contact lenses may interfere with the corneal reflection (CR) and therefore need to be replaced with MRI glasses.
When the participant is placed inside the scanner, make sure that their eyes and eyebrows are not covered by the head coil, as the head coil may cast a shadow over the eyes, which interferes with the experiment. When you slide the participant into the scanner, align on the eyebrows, so that every participant ends up in more or less the same position. Don't forget to exchange the regular mirror (grey backside) for the first-surface mirror (mirroring backside). Important: do not touch the first-surface mirror with your bare hands, as this irreversibly damages the mirror.
After the participant is placed in the bore and the camera is set up correctly, look past the camera to check whether the participant's eyes are in the centre of the mirror. If this is not the case, adjust the mirror. Ask the participant whether he or she can see all corners of the screen. Under normal circumstances, the EyeLink camera and illuminator have already been set up correctly and you should see the participant's eye in approximately the centre of the overview image, after which you can move on to the next step. If the eye is not (properly) visible or illuminated, first check whether the feet of the BOLD screen tripod are on the marked locations. If they are and the eye is still not in the image, please contact the staff of the Spinoza Centre so that they can realign the EyeLink camera and illuminator.
Make sure that the 'threshold coloring' option is turned on. Click on the pupil on the EyeLink computer and switch to the zoomed-in view on the stimulus computer by pressing the arrow keys. Carefully adjust the focus of the camera so that the CR becomes as small as possible; this means that the pupil is in focus. Press A (or click auto threshold) on the EyeLink computer to adjust the thresholds, and inspect the values under the image. The entire pupil should be coloured dark blue, and the pupil value (P) should be above 70. The CR value should be as small as possible. If too much light enters the pupil, part of the pupil may no longer be dark blue and you will see question marks behind the P value; if this is the case, adjust the lens of the illuminator slightly. A red P or CR value means the software cannot determine the position of the pupil and/or CR, which can often be solved by adjusting the position of the mirror. Another cause may be that the participant is wearing mascara. If neither is the case, the illuminator alignment needs to be adjusted. If you are unsure how to do this, please contact the staff of the Spinoza Centre.
Check your sample rate. At the MRI scanner, lowering the sample rate below 2 kHz can help with the stability of the eye tracker.
Set the desired calibration and validation settings. It is recommended to present the calibration points in randomized order. Instruct the participant to fixate the centre of each calibration point and not to shift their gaze until the point disappears. Note that even when you are not running an eye-tracking experiment but are only monitoring pupil size, it is still recommended to perform at least a three-point calibration.
Start calibration by pressing C or clicking the calibrate button. The first calibration point needs to be accepted manually by the researcher by pressing the spacebar. On the screen you will see a small green triangle, which moves when the participant moves their eyes. Wait for the triangle to be completely still and then press the spacebar to start the calibration. Whenever the participant's gaze reaches a calibration point, a green cross appears on the EyeLink image. For a 9-point calibration you want these crosses to form a regular grid. A 9-point calibration is standard, but participants who have difficulty can be given a 6-point or 3-point calibration instead. Turn off 'Manual Accept Fixation' if you want the fixation point to move on automatically once the position is registered. Either accept the calibration by pressing Enter or clicking accept, or move on to validation.
Calibration improves when:
After calibrating, start validation by pressing V or clicking the validate button. The first validation point needs to be accepted manually by the researcher by pressing the spacebar. Every time the participant's gaze reaches a validation point, a value (degrees of deviation) appears on screen. You want all of these values to be below 1 degree; if this is not the case, recalibrate.
EDF file names cannot exceed 8 characters, which means that your SUBJECT NAME cannot exceed 8 characters! If it does, you will get an error when the EDF file is transferred to the Presentation computer.
Quit the Track software on the stimulus computer by repeatedly hitting the Escape key. Quit the EyeLink software by clicking 'Exit EyeLink', which returns you to MS-DOS, after which you can turn off the EyeLink computer by pressing the power button.
For a full overview of possible EyeLink settings, please refer to the EyeLink 1000 manual, which can be found on the leftmost red shelf. The settings discussed here should generally not be adjusted.
Alignment of the EyeLink camera and illuminator should only be performed by an experienced EyeLink user or the Spinoza Centre staff. The hot-mirror setup should make it unnecessary to do anything other than refocusing the camera lens.
Further adjustments to the camera or illuminator should only be made under the guidance of an experienced user of the eye-tracking system. In case of doubt, contact Tomas Knapen or a member of the Knapen lab.
The EDF files are not in a readable format. In order to use them, you must convert them to ASC. This must be done from the command line on a computer with EyeLink installed.
The conversion utility is in the Example folder of the EDF Access API installation: C:\Program Files (x86)\SR Research\EyeLink\EDF_Access_API\Example
Start -> Run -> cmd, then cd to the EDF Access API Example folder (copy-paste the directory above), which contains edf2asc.exe.
To get a list of all arguments for this function, just type edf2asc
edf2asc -s locationofedffile\filename.edf (-s gives all samples)
edf2asc -e locationofedffile\filename.edf (-e gives all events)
Between these two commands, rename the resulting .asc file (e.g. filename_s.asc for the sample file and filename_e.asc for the event file) before running the next conversion; otherwise the .asc file will be overwritten.
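To avoid renaming files by hand, a small loop can run both conversions and rename the output for each EDF file in a folder. The MATLAB sketch below assumes that edf2asc.exe is on the system path (or that the script is run from the Example folder); the data folder is a placeholder to replace with your own.

% Sketch: convert every EDF file in a folder to separate sample and event
% ASC files, renaming the output after each call so it is not overwritten.
% Assumes edf2asc.exe is on the system path (or run this script from the
% EDF_Access_API\Example folder).
edfDir = 'C:\data\myexperiment';            % placeholder: folder with your EDF files
edfFiles = dir(fullfile(edfDir, '*.edf'));

for i = 1:numel(edfFiles)
    edfPath = fullfile(edfDir, edfFiles(i).name);
    [~, base] = fileparts(edfPath);
    ascPath = fullfile(edfDir, [base '.asc']);

    % samples only
    system(sprintf('edf2asc -s "%s"', edfPath));
    movefile(ascPath, fullfile(edfDir, [base '_s.asc']));

    % events only
    system(sprintf('edf2asc -e "%s"', edfPath));
    movefile(ascPath, fullfile(edfDir, [base '_e.asc']));
end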
The Knapen lab has created a Python-based package, hedfpy, which wraps the edf2asc utility and preprocesses gaze and pupil data, producing cleaned and parsed HDF5-based outputs.
Check out the example file "track.sce" for an example of using Presentation scripts with EyeLink.
A copy of this code is in the PresLink user guide.
Most of the code can remain, but you must make sure to specify when you want to send a message to the eye tracker's EDF file, for example at the start and end of a trial:
tracker.send_message("some string");
Note: EDF file names cannot exceed 8 characters.