
3D Video-Oculography with Dual-Cameras

MANUAL FOR USER & DEVELOPER

Sunu Wibirama contact: sunu@jteti.gadjahmada.edu

Collaborative Research Partner :

Universitas Gadjah Mada

King Mongkut's Institute of Technology Ladkrabang

ASEAN University Network / Southeast Asia Engineering Education Development Network (AUN/SEED-Net)

I. Introduction

The device most often used to measure or track eye movements is generally known as an eye movement tracking system. Recent research on diagnostic eye movement tracking has focused on 3D eye movement tracking using video processing. We have designed a 3D eye movement tracking and visualization system using a dual-camera system mounted on a consumer-grade welding goggle. The dual-camera system consists of two parallel mini CCD cameras. An analog-to-digital image converter is attached to each camera to convert the captured frames for image processing. The image processing algorithm works as follows. For first-time usage, a camera calibration procedure using a chessboard pattern must be applied to each camera to obtain the camera parameters and lens distortion coefficients. Next, the eyeball radius is obtained by direct measurement, and an iris striation template is cropped from a captured eye image. The dual-camera system captures two images of one eye simultaneously. Pupil detection based on a center-of-mass algorithm obtains the 2D coordinates of the pupil center (see the sketch after the reference list below), and template matching based on the correlation coefficient tracks the iris striation location in each image frame. The 2D coordinates of the pupil center and iris striation from each camera are then passed to a 3D coordinate extraction algorithm based on the Direct Linear Transformation (DLT). The results of the 3D coordinate extraction are used to determine the horizontal, vertical, and torsional movements of the eye. Finally, the real-time data are presented both as eye movement trajectories and as a 3D visualization. Technical research papers related to this manual are:
1. S. Wibirama, S. Tungjitkusolmun, and C. Pintavirooj, "Dual-Camera Acquisition for Accurate Measurement of Three-Dimensional Eye Movements," IEEJ Transactions on Electrical and Electronic Engineering (indexed by Scopus, ISI), John Wiley & Sons Inc., Vol. 8, 2012.
2. S. Wibirama, S. Tungjitkusolmun, and C. Pintavirooj, "Real-time 3D Eyeball Visualization on Stereo Eye Movements Tracking System," in The 3rd Regional Conference on ICT Applications for Industries and Small Companies in ASEAN Countries (RCICT 2011), pp. 127-131, Laos, 2011.
3. S. Wibirama, S. Tungjitkusolmun, C. Pintavirooj, and K. Hamamoto, "Real Time Eye Tracking using Initial Centroid and Gradient Analysis," in Proceedings of the 6th IEEE International Conference on Electrical Engineering/Electronics, Computer, Telecommunications, and Information Technology (ECTI), ISBN 978-1-4244-3388-9, Pattaya, Thailand, 6-8 May 2009, pp. 1058-1061.
4. S. Wibirama, S. Tungjitkusolmun, C. Pintavirooj, and K. Hamamoto, "Dual Cameras Acquisition For Three-Dimensional Eye-Motion Tracking," in Proceedings of the 2nd Biomedical Engineering International Conference (BMEiCON 2009), Thailand, 2009, pp. 18-19.
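As an illustration of the pupil detection step mentioned above, the following sketch computes the pupil center as the center of mass of the thresholded dark pupil region, using the OpenCV C API the project is built on. The function name and the threshold value 40 are illustrative assumptions, not the program's exact code:

    #include <cv.h>

    /* Sketch: pupil center by center of mass of the dark pupil blob. */
    void findPupilCenter(IplImage* gray, double* cx, double* cy)
    {
        IplImage* bin = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
        /* The pupil is the darkest region: keep pixels below the threshold. */
        cvThreshold(gray, bin, 40, 255, CV_THRESH_BINARY_INV);

        CvMoments m;
        cvMoments(bin, &m, 1);                         /* treat image as binary */
        double m00 = cvGetSpatialMoment(&m, 0, 0);     /* blob area */
        if (m00 > 0) {
            *cx = cvGetSpatialMoment(&m, 1, 0) / m00;  /* centroid x */
            *cy = cvGetSpatialMoment(&m, 0, 1) / m00;  /* centroid y */
        }
        cvReleaseImage(&bin);
    }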

II. Software Installation

1. The 3D eye movement tracking and visualization system was primarily developed with Borland C++ Builder 6 and runs on the Windows XP operating system. Therefore, before developing or modifying the system, Borland C++ Builder must be installed on the computer. Insert the installer and follow the instructions. (*Please ask the laboratory staff to obtain this software.)

Figure 1. Welcome screen of Borland C++ Builder installer

2. Next, download OpenCV version Beta 5a (b5a) from http://sourceforge.net/projects/opencvlibrary/ and install it from the executable file. Download highgui2.zip, a Borland-based highgui library for OpenCV, from http://wibirama.com/ngaji/data/OpenCV/Installer/highgui2.zip and extract it. Copy the highgui2 folder to C:\Program Files\OpenCV.

Figure 2. Welcome screen of OpenCV version Beta 5a installer

3. Next, copy all *.dll files inside the folders C:\Program Files\OpenCV\bin and C:\Program Files\OpenCV\highgui2\bin to C:\Windows\System32.

4. Install the VideoOCX library (*please ask the laboratory staff to obtain this software). This library is used to capture real-time image sequences from the camera in online tracking mode. Open the installer and follow the instructions.

Figure 3. Welcome screen of VideoOCX

5. Next, the OpenGL and GLUT libraries must be installed to visualize the 3D eye movements (*please ask the laboratory staff to obtain this software).

6. Open the OPENGL-SUNU folder and find the executable file called opengl95.exe. It will extract several *.dll and *.h files. Copy opengl32.dll to C:\Windows\System32. Copy gl.h, glu.h, and glaux.h to C:\Program Files\Borland\CBuilder6\include\gl. If files with the same names already exist inside the gl folder, rename the old files before copying the new gl.h, glu.h, and glaux.h.

7. Inside OPENGL-SUNU you will find glut-3.7.6-bin.zip. Extract it and locate glut.h. Copy glut.h to C:\Program Files\Borland\CBuilder6\include\gl, and copy ext.h and ext.c to C:\Program Files\Borland\CBuilder6\include. GLUT is the OpenGL Utility Toolkit, a window-system-independent toolkit for creating OpenGL applications; it provides a simple API (Application Programming Interface) for writing OpenGL programs.

8. Extract Glut_libs.zip and copy all files inside it to C:\Program Files\Borland\CBuilder6\libs, namely:
- opengl32.lib
- glu32.lib
- glut32.lib
- glut.lib
- winmm.lib

If you prefer to create a new project in Borland C++ Builder, you must first configure the project so that it works with the VideoOCX, OpenCV, and OpenGL libraries. To configure the new project, follow these steps:

1. Create a new Windows Application project in Borland C++ Builder.

2. Install VideoOCX by choosing Component > Import ActiveX Control. A small window will appear. Choose VideoOCX as well as VideoOCXTools and click the Install button.

Figure 4. Importing VideoOCX components

3. If the installation is correct, you will see these two buttons under the ActiveX tab.

4. Next, integrate OpenCV with your Borland C++ Builder project. Open the project options by choosing Project > Options. A small window will appear as shown in Figure 5 below. Choose the Directories/Conditionals tab and look at the Include path.

Figure 5. Configuring the include path of OpenCV

5. Inside the Include path, insert all of the paths below, then click OK to save:
C:\Program Files\OpenCV\cv\include
C:\Program Files\OpenCV\cvaux\include
C:\Program Files\OpenCV\cxcore\include
C:\Program Files\OpenCV\highgui2
C:\Program Files\OpenCV\otherlibs\cvcam\include
C:\Program Files\OpenCV\otherlibs\highgui
C:\Program Files\OpenCV\highgui2\otherlibs\highgui2

6. Choose View > Project Manager. Select Project1.exe (or <yourprojectname>.exe), right-click, and add the following *.lib files from C:\Program Files\OpenCV\highgui2\lib:
- cv.lib
- cvaux.lib
- cxcore.lib
- cvcam.lib
- highgui.lib
- highgui2.lib

Figure 6. Adding some library files into the project

7. Include the header files in your unit1.h file:

    #include <cv.h>
    #include <highgui.h>
    #include <cvcam.h>
    #include <cxcore.h>
    #include <cvaux.h>

8. To integrate OpenGL with your Borland C++ Builder project, add glut32.lib to the project. Choose View > Project Manager, right-click on Project1.exe (or <yourprojectname>.exe), and add glut32.lib from C:\Program Files\Borland\CBuilder6\libs. Insert the following code at the beginning of unit1.h:

    #include <windows.h>
    #define GLUT_BUILDING_LIB
    #include <gl/gl.h>
    #include <gl/glu.h>
    #include <gl/glaux.h>
    #include <gl/glut.h>
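To verify that the project configuration is correct, a small smoke test like the one below should compile, link, and run; the function is purely illustrative and assumes the include paths and *.lib files from the steps above:

    #include <windows.h>
    #define GLUT_BUILDING_LIB
    #include <gl/glut.h>
    #include <cv.h>
    #include <highgui.h>

    /* If this compiles and links, OpenCV and OpenGL/GLUT are wired in. */
    void TestLibraries()
    {
        IplImage* img = cvCreateImage(cvSize(320, 240), IPL_DEPTH_8U, 3);
        cvReleaseImage(&img);        /* OpenCV links correctly */
        glGetString(GL_VERSION);     /* OpenGL links correctly */
    }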

To install the 3D Eye Tracking program and source code, please use the installer, which is located inside the etkmitl installer folder. Double-click the setup.exe file and follow the installation instructions.

Figure 7. Adding some library files into the project

9. If the installation is correct, the eye tracking source code will be located in C:\Program Files\ETKMITL. Place your video data inside C:\Program Files\ETKMITLvideo. If you cannot find the ETKMITLvideo folder, create it manually.

10. The Borland C++ Builder project can be opened by choosing Project1.bpr. When the project is compiled, it produces the executable file Project1.exe, as shown in Figure 8 below. If you use Windows Vista or Windows 7, make sure that you run the program as Administrator.

Figure 8. 3D Eye Tracking project: (a) Windows XP/2000 and (b) Windows 7/Vista

III. Hardware Installation

Figure 9. Eye tracking hardware and calibration pattern

This section explains the hardware installation. The 3D Eye Tracking system consists of several hardware components, as shown in Figure 9:
(1) Easycap analog-to-digital video converter;
(2) Video Composite (RCA) extension cable for each Easycap;
(3) Two AC/DC adapters for the dual-camera system and one AC/DC adapter for the infrared LED;
(4) Eye tracking goggle and hands-free cable holder;
(5) Calibration pattern.

First, connect the video composite cable of each Easycap to an extension cable. The extension provides sufficient distance for the patient. Do the same cable installation for camera 2 (see Figure 10). Second, connect the video composite plug of each camera to the other end of the extension cable so that the camera is connected to the Easycap, as shown in Figure 11. Camera 1 is the right-hand camera and camera 2 is the left-hand camera from the patient's point of view. Next, connect the cables of the two AC/DC adapters to the power supply cables of both cameras, then plug the Easycaps into the USB ports of the computer. For the Easycap to work properly, its driver must be installed beforehand. Note that camera 1 and camera 2 must be connected to USB port number 1 and number 2, respectively. Figure 12 shows the Easycap associated with camera 1 connected through USB port number 1.

Figure 10. Connection of Video Composite (RCA) cables on (a) Camera 1 and (b) Camera 2

Figure 11. Cable installation: from Easycap to camera

Figure 12. Cable installation: connecting Easycap to PC


Figure 13. Connecting infrared power supply

Figure 14. Camera and infrared LED power AC/DC adapters

Figure 15. Installation of hands-free cable holder

Next, connect the infrared LED cable to its AC/DC adapter as shown in Figure 13. The camera and infrared LED adapters are then plugged into the AC power supply to power the cameras and the infrared LED with DC current. Finally, install the hands-free


cable holder on the upper part of the patient's arm before wearing the eye tracking goggle, as shown in Figure 15.

IV. General Profile of the 3D Eye Tracking Software

Figure 16. (a) Camera View and (b) Trajectory View of the 3D Eye Tracking Software

Figure 16 shows the Main Window of the 3D Eye Tracking Software. Figure 16 (a) shows the default Camera View when the software is started. The Camera View consists of four main parts. Part (1) and Part (2) display images from camera 1 and camera 2 in online tracking mode when the cameras are activated simultaneously. Part (3) is the tracking module, consisting of the Camera Setting module, the Camera Calibration module, and the Tracking and Visualization module. The left side of Part (4) shows the results of 3D eye movement tracking in the horizontal, vertical, and torsional directions (measured in degrees), while the right side of Part (4) shows the thresholding value, the size of the pupil area, and the number of captured frames.


Figure 17. Changing Camera View to Trajectory View

To change from Camera View to Trajectory View, click the Trajectory tab located at the top-right of the software, as shown in Figure 17. The Trajectory View shows three different trajectories: horizontal, vertical, and torsional movements. Figure 18 shows a screen capture of a horizontal trajectory. The x and y axes are the measured angle (in degrees) and the captured frames, respectively.

Figure 18. Horizontal trajectory

a. Camera Setting Module

The first module is Camera Setting. This module is used to configure the cameras plugged into the computer. A preview of the Camera Setting module is shown in Figure 19. Each camera has its own configuration panel, and each panel consists of several buttons, as shown in Figure 20 (a).


Figure 19. Preview of the Camera Setting module

Figure 20. (a) Configuration Panel; (b) Conversion Factor Panel

[1] Set Button
The Set button is used to configure the Windows driver that acts as an interface between the operating system and the camera. When this button is clicked, the software shows a small window presenting information about the driver, as shown in Figure 21.


Figure 21. Driver selection window

[2] Start Button
The Start button is used to initiate the camera. When this button is clicked, it typically shows the cameras connected to the computer. In this research, we use a mini CCTV camera plugged into the Easycap analog-to-digital converter. The video device that works with the Easycap is listed as Syntek STK1160. Choose this device, click Apply, and then OK.

Figure 22. Selecting the video source from the Start button

[3] Stop Button
The Stop button is used to stop capturing images from the camera. If you wish to turn off the camera, click the Stop button.

[4] Format Button
The Format button is used to configure the video stream. You can choose among various video resolutions and compressions to suit your needs.


Figure 23. Selecting video resolution and compression

[5] Info Button
The Info button shows information about the currently captured frames, based on your configuration.

Figure 24. Info button


[6] Source Button

Figure 25. Video-type selection

The Source button is used to configure the type of video (NTSC or PAL), the capture source (Video Composite or S-Video), the Video Device, and the Device Settings. The device settings depend heavily on the camera driver; some cameras provide more options than the camera used in this research. Generally, you will be able to configure the brightness, contrast, hue, and saturation of the image captured from your camera.

Figure 26. Capture-source selection


Figure 27. Device settings

[7] Capture Button
In this research, the iris striation template is defined by manually cropping the image from camera 1. To crop the iris striation template, the software is equipped with a Capture button. Activate the Define Iris Striation checkbox and press the Capture button. Using your mouse pointer, drag over the area that will be used as a template. The software automatically crops the area as a template and saves it to a file named TempLeft1.bmp located in C:\Program Files\ETKMITL\data1. This template is used to track the iris striation location in both camera 1 and camera 2 images.

Figure 28. Cropping iris striation template using Capture button
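The saved template is located in each new frame by template matching with the normalized correlation coefficient, as mentioned in the Introduction. The sketch below shows this step with the OpenCV C API; the function name and the assumption that the input frame is grayscale are ours, not the exact implementation:

    #include <cv.h>
    #include <highgui.h>

    /* Sketch: find the iris striation template in a grayscale frame. */
    CvPoint trackIrisStriation(IplImage* frame)
    {
        IplImage* templ = cvLoadImage(
            "C:\\Program Files\\ETKMITL\\data1\\TempLeft1.bmp", 0);
        int rw = frame->width  - templ->width  + 1;
        int rh = frame->height - templ->height + 1;
        IplImage* result = cvCreateImage(cvSize(rw, rh), IPL_DEPTH_32F, 1);

        cvMatchTemplate(frame, templ, result, CV_TM_CCOEFF_NORMED);

        double minVal, maxVal;
        CvPoint minLoc, maxLoc;
        cvMinMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, 0);

        cvReleaseImage(&result);
        cvReleaseImage(&templ);
        return maxLoc;    /* top-left corner of the best match */
    }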


[8] Conv.Factor Button

Figure 29. Measuring the iris diameter in pixels from camera 1

To measure the iris diameter in pixels (which the DLT algorithm needs for the millimetres-to-pixels conversion), you have to measure the length of the horizontal line connecting the left and right edges of the iris. To do so, press the Conv.Factor button to capture an image of the eyeball, then use your mouse pointer to drag a horizontal line from the left edge to the right edge. The x and y pixel values are shown in the program, as indicated by the red arrow in Figure 29.

[9] Save Iris Points Button
This button is used to save the Iris Limit Points obtained in the previous step. The x and y values are stored in a file named iris_point1.txt in C:\Program Files\ETKMITL. By doing the same for camera 2, you will get iris_point2.txt inside the same folder.

[10] Convert Button
On the right-most part of the Camera Setting module you will see the Convert button. This button is used to convert the eye radius from millimeters to pixels (a sketch of this conversion follows this list).

[11] Save Button
The conversion is then saved using the Save button. You will get a file named eye_radius.txt inside the C:\Program Files\ETKMITL folder. This file contains the measured eye radius in pixels.

[12] Capture All Cam Button
This button is used to capture images from camera 1 and camera 2 simultaneously. Generally, this button is used only in the camera calibration procedure. By capturing two images of the chessboard pattern simultaneously, we can then extract the camera parameters. This step is explained in the next section.
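A minimal sketch of the millimetres-to-pixels conversion behind the Convert and Save buttons; the variable names and measurement values are hypothetical:

    double irisDiameterMm = 11.7;   /* from the vernier caliper (Section VI) */
    double irisDiameterPx = 94.0;   /* from the line dragged in Figure 29 */
    double eyeRadiusMm    = 12.0;   /* half of the measured eye diameter */

    double mmToPx      = irisDiameterPx / irisDiameterMm;  /* conversion factor */
    double eyeRadiusPx = eyeRadiusMm * mmToPx;   /* value saved to eye_radius.txt */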


b. Camera Calibration Module

Figure 30. Preview of the Camera Calibration module

Figure 31. Input forms for the camera matrix and non-linear correction

The second module is the Camera Calibration module. This module is used to save the calibration parameters produced by the camera calibration process. On the left side are the camera matrix input forms for camera 1 and camera 2. Each input form consists of 12 camera matrix elements, a Save button (button #1), and a Clear button (button #2), as shown in Figure 31. On the right side are the non-linear correction input forms for camera 1 and camera 2. Each input form consists of 8 non-linear correction elements, a Save Parameter button (button #3), and a Correction button (button #4). The Save button (button #1) saves the camera matrix elements in a file named Mmatrix1.txt located inside C:\Program Files\ETKMITL; for camera 2, the camera matrix elements are saved in Mmatrix2.txt in the same folder. The Clear button (button #2) resets the input form to zero. The Save Parameter button (button #3) saves the non-linear correction elements in a file named result_calib1.txt inside the C:\Program Files\ETKMITL folder; for camera 2, they are saved in result_calib2.txt in the same folder. The Correction button (button #4) shows the corrected image after the non-linear correction is applied (lens distortion is removed). Figure 32 (a) and Figure 32 (b) show an image before and after correction, respectively. Notice the difference between the uncorrected and corrected images below.
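The 8 non-linear correction elements correspond to the focal lengths (Fx, Fy), the principal point (Px, Py), and the distortion coefficients exported by the calibration toolbox (see step 9 in Section V). Assuming the usual OpenCV coefficient set k1, k2, p1, p2, the radial-tangential model these parameters describe can be sketched as follows; the function is our illustration, not the module's code:

    /* Map an ideal pixel (x, y) to its distorted position (xd, yd). */
    void distortPoint(double x, double y,
                      double fx, double fy, double px, double py,
                      double k1, double k2, double p1, double p2,
                      double* xd, double* yd)
    {
        double xn = (x - px) / fx, yn = (y - py) / fy;  /* normalized coords */
        double r2 = xn*xn + yn*yn;
        double radial = 1 + k1*r2 + k2*r2*r2;           /* radial term */
        double xt = 2*p1*xn*yn + p2*(r2 + 2*xn*xn);     /* tangential terms */
        double yt = p1*(r2 + 2*yn*yn) + 2*p2*xn*yn;
        *xd = fx * (xn*radial + xt) + px;
        *yd = fy * (yn*radial + yt) + py;
    }

Removing the distortion (the Correction button) amounts to inverting this mapping, which is typically done iteratively or with a precomputed remap table.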


Figure 32. (a) Uncorrected image (distorted image) and (b) corrected image (undistorted image)

c. Tracking and Visualization Module

Figure 33. Preview of the Tracking and Visualization module

The third module is the Tracking and Visualization module. It consists of several panels: Record Video, Eye Tracking, OpenGL Visualization, and OpenGL Visual Stimulus, as shown in Figure 33. Each panel has its own functionality related to tracking. The Record Video panel is used to create offline recordings of the observed eye movements. It consists of a Start button and a Stop button, which start and finish the recording, respectively. The recorded videos from camera 1 and camera 2 are saved as camera1.avi and camera2.avi inside C:\Program Files\ETKMITLvideo.
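What the Record Video panel does can be sketched with the highgui video-writer API; the codec, frame rate, and frame size below are assumptions, and in older OpenCV betas the equivalent call may be named cvCreateAVIWriter:

    #include <cv.h>
    #include <highgui.h>

    CvVideoWriter* writer1 = cvCreateVideoWriter(
        "C:\\Program Files\\ETKMITLvideo\\camera1.avi",
        CV_FOURCC('D','I','V','X'), 25.0, cvSize(320, 240), 1);

    /* Inside the capture loop, for each grabbed IplImage* frame: */
    /*     cvWriteFrame(writer1, frame);                          */

    /* When the Stop button is pressed:                           */
    cvReleaseVideoWriter(&writer1);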


Figure 34. Select Input box for eye tracking

The Eye Tracking panel is used to activate the eye tracking algorithm. You can select the tracking input from Live Camera (online tracking) or Video File (offline tracking) using the Select Input box at the top of the Eye Tracking panel. The Start button initiates the tracking algorithm, and the Stop button ends it. The Save Ref. Point button starts measuring the horizontal, vertical, and torsional eye positions from the reference position. Below these three buttons there are three options: Remove Lens Distortion, Horizontal and Vertical Angles, and Torsional Angle. The Remove Lens Distortion option is checked to ensure that the captured image or video is free from lens distortion. The Horizontal and Vertical Angles option is checked when you want to measure the horizontal and vertical positions of the eye; note that to run the eye tracking algorithm, you must at least activate this second option. The third option is activated when a complete 3D measurement is conducted, producing the horizontal, vertical, and torsional positions of the eye. To measure the torsional position, you can use either a horizontal or a vertical marker by checking the radio buttons in the Iris Marker Option. A horizontal marker is located horizontally relative to the center of the pupil in the initial condition, while a vertical marker is located vertically relative to the center of the pupil. Figure 35 (a) and Figure 35 (b) show the usage of the horizontal and vertical markers, respectively.

Figure 35. (a) Horizontal marker and (b) vertical marker
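The complete 3D measurement rests on the DLT-based 3D coordinate extraction described in the Introduction: the matching 2D points from camera 1 and camera 2, together with the two 3 x 4 camera matrices (Mmatrix1.txt and Mmatrix2.txt), yield a small least-squares system for each 3D point. The sketch below shows one standard formulation; the function name and the use of cvSolve are our illustration, not the exact implementation:

    #include <cv.h>

    /* Sketch: triangulate (X, Y, Z) from one 2D point per camera.  */
    /* M1, M2 are the 3x4 camera matrices; (u, v) are pixel coords. */
    void dltTriangulate(const double M1[3][4], const double M2[3][4],
                        double u1, double v1, double u2, double v2,
                        double out[3])
    {
        const double (*M[2])[4] = { M1, M2 };
        double uv[2][2] = { { u1, v1 }, { u2, v2 } };
        double a[4][3], b[4];

        /* Each camera contributes two linear equations:            */
        /*   sum_j (u*m3j - m1j) X_j = m14 - u*m34, likewise for v. */
        for (int c = 0; c < 2; c++)
            for (int r = 0; r < 2; r++) {
                double s = uv[c][r];
                for (int j = 0; j < 3; j++)
                    a[2*c + r][j] = s * M[c][2][j] - M[c][r][j];
                b[2*c + r] = M[c][r][3] - s * M[c][2][3];
            }

        CvMat A = cvMat(4, 3, CV_64FC1, a);
        CvMat B = cvMat(4, 1, CV_64FC1, b);
        CvMat X = cvMat(3, 1, CV_64FC1, out);
        cvSolve(&A, &B, &X, CV_SVD);    /* least-squares solution */
    }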


Figure 36. (a) Default position; (b) rotated horizontally; (c) rotated vertically; (d) zoomed out

The OpenGL Visualization panel is used to plot the 3D eye movement data onto a 3D eye model generated by OpenGL. To show the model, press the Plot Model button; to close it, press the Close Model button. The point of view of the 3D eyeball can be changed horizontally and vertically, and you can adjust the size of the eyeball by zooming the 3D model in or out. Figure 36 shows several points of view of the 3D eyeball. Figure 36 (a) shows the default position of the 3D eyeball when it is launched. The horizontal and vertical rotation-based points of view are shown in Figure 36 (b) and Figure 36 (c), respectively. Scaling the 3D eyeball along the Z-axis produces zoom-in and zoom-out effects (Figure 36 (d)); the zoom-in effect enlarges the eyeball, while the zoom-out effect shrinks it.
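A minimal sketch of how such a GLUT view can be driven is shown below; the window setup, projection values, and sphere size are illustrative assumptions, not the module's actual code. In the real module, horizDeg and vertDeg would be fed from the tracked eye angles, and the Z translation implements the zoom:

    #include <windows.h>
    #define GLUT_BUILDING_LIB
    #include <gl/glut.h>

    static float horizDeg = 0, vertDeg = 0;   /* tracked eye angles */

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0, 0, -5);               /* Z translation = zoom */
        glRotatef(vertDeg,  1, 0, 0);         /* vertical rotation */
        glRotatef(horizDeg, 0, 1, 0);         /* horizontal rotation */
        glutWireSphere(1.0, 24, 24);          /* eyeball model */
        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("3D Eyeball");
        glutDisplayFunc(display);
        glMatrixMode(GL_PROJECTION);
        gluPerspective(45.0, 1.0, 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);
        glutMainLoop();
        return 0;
    }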

Figure 37. OpenGL Visual Stimulus selection

The fourth panel is the OpenGL Visual Stimulus. This panel is generally used during research to validate the horizontal and vertical tracking algorithm. In this panel, four


types of stimulus can be chosen, as shown in Figure 37. The Start and Stop buttons initialize and finalize the stimulus, respectively.

V. Camera Calibration

a. Calibrating the camera with a chessboard pattern

Figure 38. 5 x 6 chessboard pattern used to calibrate the camera

Camera calibration is a procedure to obtain numeric values of the camera parameters, namely the intrinsic and extrinsic parameters. Figure 38 shows the chessboard pattern used for camera calibration. Our chessboard pattern consists of 5 x 6 black and white squares. The small square size of 3 mm x 3 mm was chosen to accommodate close-range camera calibration. The chessboard pattern was attached to a thin rectangular glass plate so that the operator could hold the pattern facing the cameras. The distance between the chessboard pattern and the dual-camera system was 45 mm. The origin of the world coordinate system was set at the top-most left chessboard corner. Figure 38 (inset) shows the coordinates (X, Y, Z) of several chessboard points acting as known scene points in 3D object space, where Z = 0.

Next, you should capture eight or more distinct positions of the chessboard pattern with camera 1 and camera 2 simultaneously. To capture images from camera 1 and camera 2, use the Capture All Cam button (see Figure 20 (b)). If n is the number of capture positions, the captured images from camera 1 are saved under the naming convention image0.jpg, image2.jpg, image4.jpg, image6.jpg, ..., image(2n-2).jpg, while the captured images from camera 2 are saved as image1.jpg, image3.jpg, image5.jpg, image7.jpg, ..., image(2n-1).jpg. For example, with n = 8 capture positions, camera 1 produces image0.jpg through image14.jpg. Images from camera 1 are saved inside the C:\Program Files\ETKMITL\data1 folder, while images from camera 2 are saved inside C:\Program Files\ETKMITL\data2.

b. Generating intrinsic, extrinsic, and distortion parameters with GML and Matlab


After capturing eight different pictures of the chessboard pattern from camera 1 and camera 2, you can extract the camera and distortion parameters using the GML Camera Calibration Toolbox and Matlab. The GML Camera Calibration Toolbox is a free, open-source, OpenCV-based camera calibration application running on the Windows XP operating system. The GML toolbox can be downloaded here: http://graphics.cs.msu.ru/ru/science/research/calibration/cpp

Figures 39 to 43 show, step by step, how to calibrate the camera. Camera 1 is used as an example; the same procedure applies to camera 2.

1. First, open the GML Camera Calibration Toolbox application.
2. Create a new project by clicking File > New Project, as shown in Figure 39. Define the object size: a 5 x 6 pattern with a 3 mm square size.
3. Add all images from camera 1 by clicking Object Detection > Add Image.
4. Detect all internal chessboard corners by clicking Object Detection > Detect All.
5. Obtain the camera and distortion parameters by clicking Calibration > Calibrate. The calibration result is shown in Figure 41.
6. Save the intrinsic parameters by clicking Calibration > Export Calibration Data > Intrinsic Camera Parameters, and the extrinsic parameters by clicking Calibration > Export Calibration Data > Extrinsic Camera Parameters (Format II). Save all data inside C:\Program Files\ETKMITL\data1. You will get one intrinsic parameters file and eight extrinsic parameters files in *.txt format.
7. Compute the M matrix using Matlab. Create a file named Mmatrix.m in the same folder as the intrinsic and extrinsic files. The Matlab code is:

    %% find M matrix
    dir
    load Intrinsic.txt
    k = Intrinsic;
    load Extrinsic1.txt
    M = k*[eye(3,3) zeros(3,1)]*Extrinsic1

8. After getting the M matrix, copy each element to the M Matrix Cam 1 panel, as shown in Figure 30 and Figure 31, and save the values by clicking the Save button.
9. Copy the focal length values (Fx and Fy), the principal point (Px and Py), and the distortion parameters to the non-linear correction panel for camera 1, and save the values by clicking the Save Parameter button.
10. Follow the same steps for camera 2, saving the calibration results inside C:\Program Files\ETKMITL\data2.


Figure 39. Create new calibration project

Figure 40. Detect all internal chessboard corners


Figure 41. Calibrating camera 1

Figure 42. Saving calibration results


Figure 43. Computing the M matrix using Matlab code

VI. Initialization

Before the vertical, horizontal, and torsional angular eye positions can be determined, the physical parameters of the eye must be measured. The parameters consist of the eye diameter and the iris diameter; both are used to compute the millimetres-to-pixels conversion factor. A professional vernier caliper (Mitutoyo, Kawasaki, Japan) with an accuracy of 0.02 mm is used to measure the physical parameters. Two pieces of band-aid are attached to the carbide tips of the caliper to prevent harming the eye. The eyeball diameter of the patient is determined by carefully measuring the distance between the outer part (lateral canthus) and the inner part (medial canthus) of the eyelid aperture (palpebral aperture), as shown in Figure 44 (a). The iris diameter is measured by placing the caliper in front of the eye, as shown in Figure 44 (b). The measurement results are then entered into the conversion factor panel, as shown in Figure 20 (b). To measure the iris diameter in pixels, capture the eye image from camera 1 and camera 2 in turn, and measure the iris diameter by dragging the mouse from the left edge to the right edge of the iris, as shown in Figure 45. Save the results for camera 1 and camera 2, respectively (see Figure 29). To convert from millimeters to pixels, press the Convert button on the Camera Setting module and then the Save button (see Figure 20 (b)). To get the iris template, capture the eye image from camera 1 and crop the desired iris area. For more detail, please refer to the earlier explanation (see Figure 28).


Figure 44. Measurement of physical parameters: (a) eye diameter; (b) iris diameter

Figure 45. Measurement of the iris diameter in pixels using captured images from (a) camera 1 and (b) camera 2


VII. Create Video Recording

In order to create a video recording of eye movements, follow these steps:
1. Ask the patient to wear the goggle properly.
2. Activate camera 1 and camera 2 by pressing the Start button (see Figure 20 (a)).
3. Open the Tracking and Visualization module (see Figure 33).
4. Press the Start button inside the Record Video panel (see Figure 33).
5. Press the Stop button inside the Record Video panel to end the recording process (see Figure 33).

VIII. Offline Tracking Mode

Offline tracking mode uses a recorded video file. The procedure is as follows:
1. Open the Tracking and Visualization module (see Figure 33).
2. In the Select Input box of the Eye Tracking panel, choose Video File (see Figure 34).
3. Check the desired options: Remove Lens Distortion, Horizontal and Vertical Angles, and Torsional Angle (see Figure 33).
4. Select the iris marker type by choosing the horizontal or vertical marker (see Figure 33).
5. Press the Start button (see Figure 33).
6. Press Save Ref. Point to start tracking the eye from the reference point (see Figure 33).


IX. Online Tracking Mode

Online tracking mode uses the real-time cameras. The procedure is as follows:
1. Ask the patient to wear the goggle properly.
2. Start the cameras from the Camera Setting module (see Figure 20 (a)).
3. Open the Tracking and Visualization module (see Figure 33).
4. In the Select Input box of the Eye Tracking panel, choose Live Camera (see Figure 34).
5. Check the desired options: Remove Lens Distortion, Horizontal and Vertical Angles, and Torsional Angle (see Figure 33).
6. Select the iris marker type by choosing the horizontal or vertical marker (see Figure 33).
7. Press the Start button (see Figure 33).
8. Ask the patient to gaze along the line of sight or at a defined reference point.
9. Press Save Ref. Point to start tracking the eye from the reference point (see Figure 33).

X. Acknowledgement and Further Information

This software was developed under a collaborative research scheme between King Mongkut's Institute of Technology Ladkrabang (THAILAND), Universitas Gadjah Mada (INDONESIA), and AUN/SEED-Net. Further information can be obtained at:
1. AUN/SEED-Net official website: http://seed-net.org/index.php
2. KMITL Biomedical Engineering Research Group: http://bmekmitl.org/
3. Department of Electrical Engineering UGM: http://te.ugm.ac.id/
4. Papers and references about 3D eye tracking: http://wibirama.com/publications/


