eye-tracking

How to track eyes using Kinect SDK?

Submitted by ぐ巨炮叔叔 on 2019-12-09 10:29:10
Question: The requirement is to define a rectangle around each eye in 3D space, and there should be a way to track the eyes using the Microsoft Kinect SDK. According to this, the Face Tracking SDK uses the Kinect coordinate system to output its 3D tracking results: the origin is located at the camera's optical center (the sensor), the Z axis points towards the user, and the Y axis points up. The measurement units are meters for translation and degrees for rotation angles. Adding ... Debug3DShape("OuterCornerOfRightEye" …
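The Face Tracking SDK's 3D shape output gives individual landmark points (such as the OuterCornerOfRightEye point referenced above) rather than a ready-made rectangle, so one way to get an eye box is to pad an axis-aligned box around the two tracked corners of each eye. The sketch below is illustrative only: the helper, the corner coordinates, and the 1 cm margin are made-up values, and fetching the real points is left to the SDK.

    import numpy as np

    def eye_box(outer_corner, inner_corner, margin=0.01):
        # Given the 3D positions (in meters, Kinect camera space) of the outer
        # and inner corners of one eye, return the min/max corners of an
        # axis-aligned box around the eye, padded by `margin` meters.
        pts = np.array([outer_corner, inner_corner], dtype=float)
        return pts.min(axis=0) - margin, pts.max(axis=0) + margin

    # Hypothetical corner coordinates for illustration (x, y, z in meters).
    lo, hi = eye_box((0.03, 0.05, 0.60), (0.06, 0.05, 0.61))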

convert eye-tracking .edf file to ASC/CSV format

Submitted by 天大地大妈咪最大 on 2019-12-08 05:57:27
Question: I have a recording of tracking data in .edf format (SR Research EyeLink). I want to convert it to ASC/CSV format in Python. I have the GUI application, but I want to do it programmatically (in Python). I found the package pyEDFlib but couldn't find an example of how to convert the eye-tracking .edf file to .asc or .csv. What would be the best way to do it? Thanks. Answer 1: If I trust the page here: http://pyedflib.readthedocs.io/en/latest, you can run through all the signals in the file this way: …
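A minimal sketch of what that loop might look like, using pyedflib's documented EdfReader API (signals_in_file, getSignalLabels, readSignal); the file names are placeholders. One caveat: pyedflib reads European-Data-Format signal files, so this only applies if the .edf in question is actually in that format rather than EyeLink's own container.

    import csv
    import pyedflib

    edf_path = "recording.edf"   # placeholder paths for illustration
    csv_path = "recording.csv"

    f = pyedflib.EdfReader(edf_path)
    labels = f.getSignalLabels()
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        for i in range(f.signals_in_file):
            data = f.readSignal(i)                # one channel as a numpy array
            writer.writerow([labels[i]] + data.tolist())
    f.close()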

General algorithm to do eye tracking

Submitted by 本秂侑毒 on 2019-12-07 12:12:49
Question: The thing is, I want to build software that can track the position of the pupil, but I can't find a mathematical approach to the problem on the internet. I want to see some examples of how to calculate the position of the pupil. Thanks! Answer 1: I think the most common way involves illuminating the subject with a point light source and using the bright specular highlight on the cornea to locate the eyeball. The location of the pupil relative to the highlight then gives you the direction. To simplify the image processing you use IR light and an IR monochrome camera. To work out the math, try sketching it …
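As a rough illustration of the pupil-relative-to-glint idea, here is a small OpenCV sketch (not from the answer) that finds the brightest spot in a cropped grayscale IR eye image (the corneal glint) and the centroid of the darkest blob (the pupil), and returns their offset. The intensity threshold of 40 is an arbitrary choice.

    import cv2

    def pupil_glint_offset(eye_gray):
        # Smooth to suppress sensor noise before locating extrema.
        blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)

        # Corneal glint: the specular highlight is the brightest spot.
        _, _, _, glint = cv2.minMaxLoc(blurred)

        # Pupil: threshold the darkest pixels and take the largest blob's centroid.
        _, dark = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        pupil = (m["m10"] / m["m00"], m["m01"] / m["m00"])

        # The pupil-minus-glint vector encodes the gaze direction.
        return (pupil[0] - glint[0], pupil[1] - glint[1])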

Setting a reference number and comparing it to other data in a text file

Submitted by 扶醉桌前 on 2019-12-06 09:17:21
Question: The project is based on an eye tracker. Let me outline the idea behind the project so my problem is easier to understand. I have a Tobii C eye tracker. The tracker gives out the X, Y coordinates of where I am looking, but the device is very sensitive: when I look at one point, the eye tracker sends out many different coordinates, though within a ±100 range, as I found out. Even though you are staring at one point, your eyes keep moving, therefore giving out many readings. These readings (float numbers) are then saved in a text file. Now I only need 1 data (X …
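One simple way to collapse that jitter into a single fixation point is to treat the first sample as the reference and average every sample that stays within the ±100 tolerance. The sketch below assumes a text file with one "x y" pair per line; the layout and the helper name are made up for illustration.

    def filter_fixation(path, tolerance=100.0):
        # Read "x y" float pairs, one per line (assumed file layout).
        with open(path) as f:
            samples = [tuple(map(float, line.split())) for line in f if line.strip()]
        if not samples:
            return None

        # Take the first sample as the reference and keep samples within +/- tolerance.
        ref_x, ref_y = samples[0]
        close = [(x, y) for x, y in samples
                 if abs(x - ref_x) <= tolerance and abs(y - ref_y) <= tolerance]

        # Average the accepted samples into one coordinate.
        n = len(close)
        return sum(x for x, _ in close) / n, sum(y for _, y in close) / n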

Open eye and closed eye in Android with OpenCV eye detection and tracking

Submitted by 人盡茶涼 on 2019-12-06 09:12:24
Question: I made an eye-detecting application by following this link and it works. How can I detect whether the eye is open or closed? Is there a library in Android to detect open or closed eyes? Answer 1: I've no idea whether there is any library for that, but using the technique described in the article Eye-blink detection system for human–computer interaction by Aleksandra Królak and Paweł Strumiłło (you can download it here and here, and here is a simplified version) is in my opinion a good option. Generally this technique is quite simple: find the eye (or both eyes), remember this part of the image as a template, and in the next frame use …
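The core of that template-matching step can be sketched with OpenCV's matchTemplate (shown in Python for brevity, even though the question targets Android): if the best normalized correlation between the remembered open-eye template and the new frame drops below a threshold, the eye probably no longer looks open. The 0.6 threshold is an arbitrary illustration.

    import cv2

    def eye_probably_closed(frame_gray, open_eye_template, threshold=0.6):
        # Slide the remembered open-eye template over the new frame.
        result = cv2.matchTemplate(frame_gray, open_eye_template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)

        # A weak best match means the region no longer resembles an open eye,
        # which is taken here as a blink / closed-eye signal.
        return max_val < threshold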

Speeding up vectorized eye-tracking algorithm in numpy

Submitted by 不问归期 on 2019-12-06 07:55:26
Question: I'm trying to implement Fabian Timm's eye-tracking algorithm [http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf] (found here: [http://thume.ca/projects/2012/11/04/simple-accurate-eye-center-tracking-in-opencv/]) in numpy and OpenCV and I've hit a snag. I think I've vectorized my implementation decently enough, but it's still not fast enough to run in real time, and it doesn't detect pupils with as much accuracy as I had hoped. This is my first time using numpy, so I'm not sure what I've done wrong. def find_pupil(eye): eye_len = np.arange(eye.shape[0]) xx, yy = np.meshgrid(eye_len …
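For context, below is a condensed and deliberately unoptimized sketch of the objective from the linked Timm & Barth paper, not the asker's code: every candidate center c is scored by the sum of max(0, d·g)² over the strong image gradients g, where d is the unit vector from c to the gradient's pixel, and the score is weighted by inverted intensity so that dark pupil pixels win. The gradient threshold and blur size are arbitrary choices.

    import cv2
    import numpy as np

    def timm_eye_center(eye_gray, grad_thresh=0.3):
        eye = eye_gray.astype(np.float64)
        gy, gx = np.gradient(eye)
        mag = np.hypot(gx, gy)

        # Keep only strong gradients and normalize them to unit length.
        keep = mag > grad_thresh * mag.max()
        ys, xs = np.nonzero(keep)
        gx_n, gy_n = gx[keep] / mag[keep], gy[keep] / mag[keep]

        h, w = eye.shape
        cy, cx = np.mgrid[0:h, 0:w]
        objective = np.zeros((h, w))

        # Accumulate each strong-gradient pixel's contribution to every candidate
        # center; looping over gradient pixels keeps memory bounded but is slow.
        for x, y, gxi, gyi in zip(xs, ys, gx_n, gy_n):
            dx, dy = x - cx, y - cy
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0
            dot = (dx * gxi + dy * gyi) / norm
            objective += np.maximum(dot, 0.0) ** 2

        # Darker pixels are more likely to be the pupil center.
        weight = 255.0 - cv2.GaussianBlur(eye, (5, 5), 0)
        row, col = np.unravel_index(np.argmax(objective * weight), objective.shape)
        return col, row   # (x, y) of the estimated eye center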
