I am trying to get the frame image for processing while using the new Android face detection Mobile Vision API.
So I have created a custom Detector to get the Frame and tried to call getBitmap().
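For reference, such a custom detector is usually a thin wrapper around the stock FaceDetector. Here is a minimal sketch; the class and field names are my own illustration, not anything from the original question:

import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

// Wraps an existing detector so each Frame can be inspected before detection runs.
public class FrameGrabbingDetector extends Detector<Face> {
    private final Detector<Face> delegate;
    private volatile Frame lastFrame;

    public FrameGrabbingDetector(Detector<Face> delegate) {
        this.delegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        lastFrame = frame;             // keep a reference for later processing
        return delegate.detect(frame); // let the wrapped FaceDetector do the work
    }

    public Frame getLastFrame() {
        return lastFrame;
    }
}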
You have probably sorted this out already, but in case someone stumbles upon this question in the future, here's how I solved it:
As @pm0733464 points out, the default image format coming out of android.hardware.Camera
is NV21, and that is the one used by CameraSource.
This Stack Overflow answer provides the conversion:
int w = frame.getMetadata().getWidth();
int h = frame.getMetadata().getHeight();
// The frame's image data is the raw NV21 buffer, so it can be wrapped directly
YuvImage yuvimage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, w, h, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the quality of the generated JPEG
byte[] jpegArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Although frame.getGrayscaleImageData() suggests that bitmap will be a grayscale version of the original image, in my experience that is not the case: the resulting bitmap is identical to the one supplied natively to the SurfaceHolder. (Presumably this is because the buffer actually holds the full NV21 frame, of which the leading Y plane is the grayscale data, so compressing it as NV21 keeps the color information.)
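For completeness, here is a rough sketch of how a wrapper detector like the one above would be plugged into a CameraSource; the context variable, preview size, and class names are illustrative assumptions rather than anything from the original question:

// context is assumed to be a valid Activity or Application context
FaceDetector faceDetector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .build();
// FrameGrabbingDetector is the hypothetical wrapper sketched earlier
FrameGrabbingDetector detector = new FrameGrabbingDetector(faceDetector);
CameraSource cameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .build();

From there, the conversion shown above can be run on detector.getLastFrame(), or directly inside detect(), whenever a Bitmap of the current frame is needed.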