Track eye pupil in a video


Question


I am working on a project aimed at tracking the eye pupil. For this I have built a head-mounted system that captures images of the eye. With the hardware portion complete, I am stuck on the software part. I am using OpenCV. Please let me know what would be the most efficient way to track the pupil. HoughCircles didn't perform well.

After that I also tried an HSV filter; here is the code and a link to screenshots of the raw image and the processed one. Please help me resolve this issue. The link also contains the video of the eye pupil that I am using in this code.

https://picasaweb.google.com/118169326982637604860/16November2011?authuser=0&authkey=Gv1sRgCPKwwrGTyvX1Aw&feat=directlink

Code:

include "cv.h"

include"highgui.h"

IplImage* GetThresholdedImage(IplImage* img)
{
    // Convert to HSV and keep only very dark pixels (the pupil region);
    // the exact range was tuned for this particular camera setup
    IplImage *imgHSV=cvCreateImage(cvGetSize(img),8,3);
    cvCvtColor(img,imgHSV,CV_BGR2HSV);
    IplImage *imgThresh=cvCreateImage(cvGetSize(img),8,1);
    cvInRangeS(imgHSV,cvScalar(0, 84, 0, 0),cvScalar(179, 256, 11, 0),imgThresh);
    cvReleaseImage(&imgHSV);
    return imgThresh;
}

int main(int argc, char **argv)
{

    IplImage *imgScribble= NULL;
    char c=0;
    CvCapture *capture;
    capture=cvCreateFileCapture("main.avi");

    if(!capture)
    {
        printf("Camera could not be initialized");
        exit(0);
    }
    cvNamedWindow("Simple");
    cvNamedWindow("Thresholded");

    while(c!=32)
    {
        IplImage *img=0;
        img=cvQueryFrame(capture);
        if(!img)
            break;
        if(imgScribble==NULL)
            imgScribble=cvCreateImage(cvGetSize(img),8,3);

        IplImage *timg=GetThresholdedImage(img);
        CvMoments *moments=(CvMoments*)malloc(sizeof(CvMoments));
        cvMoments(timg,moments,1); // last argument = 1: treat the thresholded image as binary

        double moment10 = cvGetSpatialMoment(moments, 1, 0);
        double moment01 = cvGetSpatialMoment(moments, 0, 1);
        double area = cvGetCentralMoment(moments, 0, 0); // zeroth moment = blob area for a binary image

        static int posX = 0;
        static int posY = 0;

        int lastX = posX;
        int lastY = posY;

        // Only update the position when the thresholded blob is non-empty
        // (avoids a division by zero, e.g. during a blink)
        if(area > 0)
        {
            posX = moment10/area;
            posY = moment01/area;
        }
         // Print it out for debugging purposes
        printf("position (%d,%d)\n", posX, posY);
        // We want to draw a line only if its a valid position
        if(lastX>0 && lastY>0 && posX>0 && posY>0)
        {
            // Draw a yellow line from the previous point to the current point
            cvLine(imgScribble, cvPoint(posX, posY), cvPoint(lastX, lastY), cvScalar(0,255,255), 5);
        }
        // Add the scribbling image and the frame...

        cvAdd(img, imgScribble, img);

        cvShowImage("Simple",img);
        cvShowImage("Thresholded",timg);
        c=cvWaitKey(3);
        cvReleaseImage(&timg);
        free(moments); // allocated with malloc above, so use free(), not delete

    }
    //cvReleaseImage(&img);
    cvReleaseImage(&imgScribble);
    cvReleaseCapture(&capture);
    cvDestroyWindow("Simple");
    cvDestroyWindow("Thresholded");

    return 0;
}

EDIT: I am able to track the eye and find the center coordinates of the pupil precisely.

First I threshold the image taken by the head-mounted camera. Then I run a contour-finding algorithm and compute the centroid of all the contours. This gives me the center coordinates of the eye pupil. The method works fine in real time and also detects eye blinks with very good accuracy.
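A minimal sketch of that pipeline, written with OpenCV's Python bindings (cv2) rather than the C API used above; the file name and the threshold value are placeholder assumptions that would have to be tuned for the actual footage:

import cv2

# Read one frame of the head-mounted camera footage (placeholder file name)
frame = cv2.imread("eye_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The pupil is the darkest blob, so an inverse binary threshold turns it white.
# The threshold value (30) is a guess and must be tuned for the setup.
_, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY_INV)

# Find contours; the [-2] index works with both the 3.x and 4.x return formats
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

if contours:
    # Take the largest contour as the pupil (a simplification of the
    # "centroid of all the contours" step described above)
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("pupil center:", (cx, cy))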

Now my aim is to embed this feature into a game (a racing game): if I look left or right the car should move left or right, and if I blink the car should slow down. How could I proceed? Would I need a game engine to do that?

I have heard of some open-source game engines that are compatible with Visual Studio 2010 (Unity etc.). Is that feasible? If yes, how should I proceed?
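One hypothetical way to bridge the two pieces, independent of any particular engine: keep the tracking loop running on its own and turn the pupil coordinates into discrete control events that the game reads like ordinary input. The function below is only an illustrative sketch; the neutral position and dead-zone values are made-up numbers that would need per-user calibration.

# Hypothetical mapping from the tracked pupil position to game commands.
# center_x is the calibrated "looking straight ahead" x-coordinate;
# dead_zone suppresses jitter around the center. Both values are illustrative.
def gaze_to_command(pupil, center_x=160, dead_zone=20):
    if pupil is None:               # no pupil found in this frame -> blink
        return "BRAKE"
    x, _ = pupil
    if x < center_x - dead_zone:
        return "LEFT"
    if x > center_x + dead_zone:
        return "RIGHT"
    return "STRAIGHT"

The resulting command could then be handed to the game by simulating key presses or over a local socket, so the tracking code and the game stay decoupled.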


Answer 1:


I am one of the developers of SimpleCV. We maintain an open-source python library for computer vision. You can download it at SimpleCV.org. SimpleCV is great for solving these types of problems by hacking on the command line. I was able to extract the pupil in only a couple lines of code. Here you go:

from SimpleCV import *  # pulls in Image, BlobMaker, and Color

img = Image("eye4.jpg") # load the image
bm = BlobMaker() # create the blob extractor
# invert the image so the pupil is white, threshold the image, and invert again
# and then extract the information from the image
blobs = bm.extractFromBinary(img.invert().binarize(thresh=240).invert(),img)

if(len(blobs)>0): # if we got a blob
    blobs[0].draw() # the zeroth blob is the largest blob - draw it
    locationStr = "("+str(blobs[0].x)+","+str(blobs[0].y)+")"
    # write the blob's centroid to the image
    img.dl().text(locationStr,(0,0),color=Color.RED)
    # save the image
    img.save("eye4pupil.png")
    # and show us the result.
    img.show()

Here are the results.

So your next steps are to use some sort of tracker, like a Kalman filter, to track the pupil robustly. You may want to model the eye as a sphere and track the pupil's centroid in spherical coordinates (i.e. theta and phi). You will also want to write a bit of code to detect blink events so the system doesn't go all wonky when the user blinks. I suggest using a Canny edge detector to find the largest horizontal lines in the image and assuming those are the eyelids. I hope this helps, and please let us know how your work progresses.
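A minimal sketch of the suggested tracking step, using OpenCV's built-in Kalman filter on the pupil centroid (a constant-velocity model in image coordinates; the noise covariances below are placeholder values that need tuning):

import numpy as np
import cv2

# Constant-velocity model: state = [x, y, vx, vy], measurement = [x, y]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)       # placeholder tuning
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)   # placeholder tuning

def track(measured_xy):
    """Predict the pupil position for this frame; correct with a measurement
    when one exists (skip the correction during a blink, when no pupil is seen)."""
    predicted = kf.predict()
    if measured_xy is not None:
        kf.correct(np.array(measured_xy, dtype=np.float32).reshape(2, 1))
    return float(predicted[0, 0]), float(predicted[1, 0])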




Answer 2:


It all depends on how good your system must be. If it's a two-month university project, it's fine to find and track some blobs or to use a ready-made solution, as Kscottz recommended.

But if you aim to have a more serious system, you must go deeper.

An approach I recommend is to detect facial interest points. A good example is Active Appearance Models (AAM), which seem to be the best at tracking faces:

http://www2.imm.dtu.dk/~aam/

and

http://www.youtube.com/watch?v=M1iu__viJN8

It requires a solid understanding of computer vision algorithms, good programming skills, and some work. But the results will be worth the effort.

And do not be fooled by the fact that the demos show whole-face tracking. You can train it to track anything: hands, eyes, flowers or leaves, etc.

(Before starting with AAM, you may want to read up on other face-tracking algorithms; they may suit your needs better.)
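Full AAM implementations are a sizeable project in themselves. As a lighter-weight illustration of the "facial interest points" idea (a different technique from AAM: a pre-trained regression-tree landmark model), here is a sketch using dlib's 68-point face landmark predictor; the model file and image name are assumptions, and the model has to be downloaded separately:

import dlib

# Assumes dlib is installed and the 68-point model file has been downloaded
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")   # placeholder image name
faces = detector(img, 1)                # upsample once to catch smaller faces

for face in faces:
    shape = predictor(img, face)
    # In the 68-point scheme, indices 36-47 cover the two eye regions
    eye_points = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
    print("eye landmarks:", eye_points)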




Answer 3:


This is my solution: I am able to track the eye and find the center coordinates of the pupil precisely.

First I threshold the image taken by the head-mounted camera. Then I run a contour-finding algorithm and compute the centroid of all the contours. This gives me the center coordinates of the eye pupil. The method works fine in real time and also detects eye blinks with very good accuracy.



Source: https://stackoverflow.com/questions/8145725/track-eye-pupil-in-a-video
