iOS11 vision framework mapping all face landmarks

Submitted by 无人久伴 on 2019-12-04 08:11:19

Question


I am playing with the Vision framework and getting all landmark points with this code:

if let allFaceLandmarks = landmarks.allPoints {
    print(allFaceLandmarks)
}

But I can't find a mapping for these points, for example the index numbers for the right eye.

I'm looking for something like this, but for the Vision framework instead.


Answer 1:


I have no clue why Apple doesn't provide a graphic of this; it seems like it would be super helpful information to include in the docs. At any rate, I was able to read the allPoints property of the observation and draw the points out with their index numbers. I'm not really sure about the difference between the nose and the nose crest; you can draw them out and see for yourself...

  • Right eyebrow = 0 - 3
  • Left eyebrow = 4 - 7
  • Right eye contour = 8 - 15
  • Left eye contour = 16 - 23
  • Outer lips = 24 - 33
  • Inner lips = 34 - 39
  • Face Contour = 40 - 50
  • Nose and Nose Crest = 51 - 59
  • Median line = 60 - 62
  • Right Pupil = 63
  • Left Pupil = 64
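Based on the ranges above, here is a minimal sketch of pulling out a single region by index. It assumes a VNFaceObservation named observation from a completed VNDetectFaceLandmarksRequest, and the ranges are empirical (taken from the list above), not from Apple documentation:

```swift
import Vision

// Index range observed for the right-eye contour in allPoints.
// Empirical, based on the list above, not documented by Apple.
let rightEyeRange = 8...15

func rightEyePoints(from observation: VNFaceObservation) -> [CGPoint] {
    guard let all = observation.landmarks?.allPoints else { return [] }
    // normalizedPoints are in a 0...1 coordinate space
    // relative to the face bounding box.
    let points = all.normalizedPoints
    guard points.count > rightEyeRange.upperBound else { return [] }
    return rightEyeRange.map { points[$0] }
}
```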

Here's a pic that hopefully helps!
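To reproduce a numbered drawing like that, the normalized points first need converting into image coordinates; a minimal sketch using VNFaceLandmarkRegion2D's pointsInImage(imageSize:), where the image size is whatever image you ran the request against:

```swift
import Vision

// Convert allPoints into image-space coordinates so each point can be
// drawn and labeled with its index (matching the ranges listed above).
func indexedImagePoints(for observation: VNFaceObservation,
                        imageSize: CGSize) -> [(index: Int, point: CGPoint)] {
    guard let all = observation.landmarks?.allPoints else { return [] }
    let imagePoints = all.pointsInImage(imageSize: imageSize)
    return imagePoints.enumerated().map { (index: $0.offset, point: $0.element) }
}
```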




Answer 2:


left eyebrow    : 1~4
right eyebrow   : 5~8
left eye        : 9~16
right eye       : 17~24
outer mouth     : 25~34
inner mouth     : 35~40
left contour    : 41~45
chin            : 46
right contour   : 47~51
nose outline    : 52~60
nose crest      : 61~63
left pupil      : 64
right pupil     : 65




Answer 3:


This post was super helpful for me, so I figured I would update it for iOS 13 (the original scope of the question is iOS 11). Starting with iOS 13, you will get a different set of points (VNDetectFaceLandmarksRequestRevision3) unless you manually specify the VNDetectFaceLandmarksRequestRevision2 revision. The revision property is only available from iOS 12, so you need something like:

let faceLandmarksRequest = VNDetectFaceLandmarksRequest(completionHandler: self.myFaceFunction)

if #available(iOS 12.0, *) {
    // Force revision 2 even on iOS 13 or greater,
    // where VNDetectFaceLandmarksRequestRevision3 is the default.
    faceLandmarksRequest.revision = VNDetectFaceLandmarksRequestRevision2
}

When I was updating my app talkr to iOS 13, I couldn't find a reference image for the new points like the one in this post, so I thought I would generate one. I hope it helps someone!
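For context, a minimal sketch of running such a request end-to-end against a CGImage; the function name and the body of the completion handler are illustrative, not taken from the original answer:

```swift
import Vision

func detectLandmarks(in cgImage: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        for face in observations {
            // With revision 2 forced below, allPoints follows the
            // older 65-point layout described in the answers above.
            print(face.landmarks?.allPoints?.normalizedPoints ?? [])
        }
    }
    if #available(iOS 12.0, *) {
        request.revision = VNDetectFaceLandmarksRequestRevision2
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```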




Answer 4:


I hope you are already using the Vision API's VNDetectFaceLandmarksRequest class to detect the facial features.

Each landmark we find is of type VNFaceLandmarks2D:

var landmarks: VNFaceLandmarks2D? { get }

If you check the documentation for the VNFaceLandmarks2D class's instance properties, you can find all the details about the detected face. Below are the regions we can get from each landmark object:

  • allPoints
  • faceContour
  • innerLips
  • leftEye
  • leftEyebrow
  • leftPupil
  • medianLine
  • nose
  • noseCrest
  • outerLips
  • rightEye
  • rightEyebrow
  • rightPupil

All of them are of type VNFaceLandmarkRegion2D.
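Using these named regions avoids the undocumented index arithmetic from the earlier answers. A minimal sketch, assuming the observation comes from a completed request:

```swift
import Vision

// Collect the points of each named region, keyed by a readable label.
// The named accessors make the index ranges above unnecessary.
func namedRegions(of observation: VNFaceObservation) -> [String: [CGPoint]] {
    guard let landmarks = observation.landmarks else { return [:] }
    let regions: [(String, VNFaceLandmarkRegion2D?)] = [
        ("faceContour", landmarks.faceContour),
        ("leftEye", landmarks.leftEye),
        ("rightEye", landmarks.rightEye),
        ("leftEyebrow", landmarks.leftEyebrow),
        ("rightEyebrow", landmarks.rightEyebrow),
        ("nose", landmarks.nose),
        ("noseCrest", landmarks.noseCrest),
        ("medianLine", landmarks.medianLine),
        ("outerLips", landmarks.outerLips),
        ("innerLips", landmarks.innerLips),
        ("leftPupil", landmarks.leftPupil),
        ("rightPupil", landmarks.rightPupil),
    ]
    var result: [String: [CGPoint]] = [:]
    for (name, region) in regions {
        if let region = region {
            result[name] = region.normalizedPoints
        }
    }
    return result
}
```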



Source: https://stackoverflow.com/questions/45298639/ios11-vision-framework-mapping-all-face-landmarks
