Track Eye Pupil Position with Webcam, OpenCV, and Python

Submitted by 痞子三分冷 on 2021-02-07 02:32:18

Question


I am trying to build a robot that I can control with basic eye movements. I am pointing a webcam at my face, and depending on the position of my pupil, the robot would move a certain way. If the pupil is in the top, bottom, left corner, or right corner of the eye, the robot would move forward, backward, left, or right, respectively.

My original plan was to use an eye Haar cascade to find my left eye. I would then use HoughCircles on the eye region to find the center of the pupil. I would determine where the pupil was in the eye by finding the distance from the center of the Hough circle to the borders of the general eye region.

So for the first part of my code, I'm hoping to be able to track the center of the eye pupil, as seen in this video. https://youtu.be/aGmGyFLQAFM?t=38

But when I run my code, it cannot consistently find the center of the pupil; the Hough circle is often drawn in the wrong area. How can I make my program consistently find the center of the pupil, even when the eye moves?

Is it possible/better/easier for me to tell my program where the pupil is at the beginning? I've looked at some other eye tracking methods, but I cannot form a general algorithm. If anyone could help form one, that would be much appreciated! https://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf

import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')

#number signifies camera
cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    #faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = eye_cascade.detectMultiScale(gray)
    for (ex,ey,ew,eh) in eyes:
        cv2.rectangle(img,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
        roi_gray2 = gray[ey:ey+eh, ex:ex+ew]
        roi_color2 = img[ey:ey+eh, ex:ex+ew]
        circles = cv2.HoughCircles(roi_gray2,cv2.HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0)
        # HoughCircles returns None when no circle is found, so check before iterating
        if circles is not None:
            # round to integer pixel coordinates before drawing
            circles = np.uint16(np.around(circles))
            for i in circles[0, :]:
                # draw the outer circle
                cv2.circle(roi_color2, (i[0], i[1]), i[2], (255, 255, 255), 2)
                print("drawing circle")
                # draw the center of the circle
                cv2.circle(roi_color2, (i[0], i[1]), 2, (255, 255, 255), 3)
    cv2.imshow('img',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

Answer 1:


I can see two alternatives, from some work that I did before:

  1. Train a Haar detector to detect the eyeball, using training images cropped so that the pupil sits at the center and the eyeball spans the full width. I found this better than using Hough circles or the stock OpenCV eye detector (the one used in your code).

  2. Use Dlib's face landmark points to estimate the eye region. Then use the contrast caused by the white and dark regions of the eyeball, together with contours, to estimate the center of the pupil. This produced much better results.




Answer 2:


Just replace the line where you create the HoughCircles with this:

circles = cv2.HoughCircles(roi_gray2,cv2.HOUGH_GRADIENT,1,200,param1=200,param2=1,minRadius=0,maxRadius=0)

I just changed a couple of parameters, and it gives me more accuracy.

Detailed information about the parameters is available in the OpenCV HoughCircles documentation.



Source: https://stackoverflow.com/questions/45789549/track-eye-pupil-position-with-webcam-opencv-and-python
