CIDetector gives wrong positions for facial features

Submitted by 半城伤御伤魂 on 2019-12-07 10:33:14

Question


By now I know that the coordinate system is messed up. I have tried flipping the view and the imageView; nothing. I then tried to flip the coordinates of the features, and I still get the same problem. I know it detects the faces, eyes, and mouth, but when I try to place the overlay boxes from the sample code, they are out of position (to be exact, they are off-screen to the right). I'm stumped as to why this is happening.

I'll post some code because I know some of you like specifics:

-(void)faceDetector
{
    // Load the picture for face detection
//    UIImageView* image = [[UIImageView alloc] initWithImage:mainImage];
    [self.imageView setImage:mainImage];
    [self.imageView setUserInteractionEnabled:YES];

    // Draw the face detection image
//    [self.view addSubview:self.imageView];

    // Execute the method used to markFaces in background
//    [self performSelectorInBackground:@selector(markFaces:) withObject:self.imageView];

    // flip image on y-axis to match coordinate system used by core image
//    [self.imageView setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
//    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];

//    [toolbar setTransform:CGAffineTransformMakeScale(1, -1)];
    [toolbar setFrame:CGRectMake(0, 0, 320, 44)];

    // Execute the method used to markFaces (note: markFaces: touches UIKit, so it must run on the main thread)
    [self performSelectorOnMainThread:@selector(markFaces:) withObject:_imageView waitUntilDone:NO];
//    [self markFaces:self.imageView];
}

-(void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // create a face detector - since speed is not an issue we'll use a high accuracy
    // detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

//    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    CGAffineTransform transform = CGAffineTransformMakeScale(self.view.frame.size.width/mainImage.size.width, -self.view.frame.size.height/mainImage.size.height);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.bounds.size.height);

    // create an array containing all the detected faces from the detector
    NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    NSArray* features = [detector featuresInImage:image options:imageOptions];
//    NSArray* features = [detector featuresInImage:image];

    NSLog(@"Marking Faces: Count: %lu", (unsigned long)[features count]);

    // we'll iterate through every detected face.  CIFaceFeature provides us
    // with the width for the entire face, and the coordinates of each eye
    // and the mouth if detected.  Also provided are BOOL's for the eye's and
    // mouth so we can check if they already exist.
    for(CIFaceFeature* faceFeature in features)
    {


        // create a UIView using the bounds of the face
//        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // get the width of the face
//        CGFloat faceWidth = faceFeature.bounds.size.width;
        CGFloat faceWidth = faceRect.size.width;

        // create a UIView using the bounds of the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.imageView addSubview:faceView];
        NSLog(@"Face -> X: %f, Y: %f, W: %f, H: %f",faceRect.origin.x, faceRect.origin.y, faceRect.size.width, faceRect.size.height);

        if(faceFeature.hasLeftEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint leftEye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEye.x-faceWidth*0.15, leftEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the leftEyeView based on the face
            [leftEyeView setCenter:leftEye];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.imageView addSubview:leftEyeView];
            NSLog(@"Has Left Eye -> X: %f, Y: %f",leftEye.x, leftEye.y);
        }

        if(faceFeature.hasRightEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(rightEye.x-faceWidth*0.15, rightEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor yellowColor] colorWithAlphaComponent:0.3]];
            // set the position of the rightEyeView based on the face
            [rightEyeView setCenter:rightEye];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.imageView addSubview:rightEyeView];
            NSLog(@"Has Right Eye -> X: %f, Y: %f", rightEye.x, rightEye.y);
        }

//        if(faceFeature.hasMouthPosition)
//        {
//            // create a UIView with a size based on the width of the face
//            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
//            // change the background color for the mouth to green
//            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
//            // set the position of the mouthView based on the face
//            [mouth setCenter:faceFeature.mouthPosition];
//            // round the corners
//            mouth.layer.cornerRadius = faceWidth*0.2;
//            // add the new view to the window
//            [self.imageView addSubview:mouth];
//        }
    }
}

I know the code segment is a little long, but that's the main gist of it. The only other relevant thing is that I have a UIImagePickerController that lets the user pick an existing image or take a new one. The image is then set into the UIImageView on screen, to be displayed along with the various boxes and circles, but so far no luck getting them to show up in the right place :/

Any help would be appreciated. Thanks!

Update:

I've added a photo of what it does now so you can get an idea. I've applied the new scaling, which works a little better, but it's nowhere near what I want it to do.


Answer 1:


Just use the code from Apple's SquareCam sample app. It aligns the square correctly in any orientation for both the front and rear cameras. Interpolate along faceRect for the correct eye and mouth positions. Note: you do have to swap the x position with the y position from the face feature. I'm not sure exactly why the swap is needed, but it gives you the correct positions.




Answer 2:


Your transform is missing a scale, unless your image view has exactly the same size as your image. Start with

   CGAffineTransformMakeScale( viewWidth / imageWidth, - viewHeight / imageHeight )

where viewWidth and viewHeight are the size of your view, and imageWidth and imageHeight are the size of your image.




Answer 3:


So after playing around, and with the help of @Sven, I figured it out.

CGAffineTransform transform = CGAffineTransformMakeScale(self.imageView.bounds.size.width / mainImage.size.width,
                                                         -self.imageView.bounds.size.height / mainImage.size.height);
transform = CGAffineTransformRotate(transform, degreesToRadians(270));

I had to adjust the transform to scale between the image size and the size of the image view, and then for some reason I had to rotate it, but it works perfectly now.



Source: https://stackoverflow.com/questions/16549410/cidetector-give-wrong-position-on-facial-features
