Question
I am trying to get the frame image for processing while using the new Android face detection Mobile Vision API.
So I created a custom Detector to get the Frame and tried to call getBitmap(), but it returns null, so I accessed the grayscale data of the frame instead. Is there a way to create a Bitmap from it, or a similar image-holder class?
public class CustomFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    public CustomFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        // Byte array to Bitmap here
        return mDelegate.detect(frame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
Answer 1:
You have probably sorted this out already, but in case someone stumbles upon this question in the future, here's how I solved it:
As @pm0733464 points out, the default image format coming out of android.hardware.Camera is NV21, and that is the format CameraSource uses.
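As a side note on the NV21 layout: the buffer holds a full-resolution Y (luma) plane followed by interleaved, half-resolution V/U chroma samples, so its length works out to roughly w*h*3/2. A minimal sketch (the helper name is hypothetical, not part of any Android API):

```java
public class Nv21 {
    // Expected NV21 buffer length for a width x height frame:
    // full-resolution Y plane + interleaved half-resolution V/U plane.
    public static int bufferSize(int width, int height) {
        int ySize = width * height;
        int uvSize = 2 * ((width + 1) / 2) * ((height + 1) / 2);
        return ySize + uvSize;
    }
}
```

This is handy as a sanity check on the array you get back from frame.getGrayscaleImageData().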
This Stack Overflow answer provides the solution:
// YuvImage requires a byte[], so unwrap the ByteBuffer first
YuvImage yuvImage = new YuvImage(byteBuffer.array(), ImageFormat.NV21, w, h, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the JPEG quality
byte[] jpegArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Although frame.getGrayscaleImageData() suggests the resulting bitmap will be a grayscale version of the original image, that is not the case in my experience. In fact, the bitmap is identical to the one supplied to the SurfaceHolder natively.
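If you actually want a grayscale Bitmap, you can build one directly from the Y plane instead of going through the JPEG round-trip. A minimal sketch, assuming the buffer starts with the full-resolution luma plane as NV21 specifies (the class and method names here are hypothetical); the returned pixels would be passed to Bitmap.createBitmap(pixels, w, h, Bitmap.Config.ARGB_8888):

```java
public class GrayscaleUtil {
    // Converts the first width*height luma bytes of an NV21 buffer
    // into ARGB_8888 pixel values (R = G = B = luma, alpha opaque).
    public static int[] yPlaneToArgb(byte[] yPlane, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < width * height; i++) {
            int luma = yPlane[i] & 0xFF; // treat the byte as unsigned 0..255
            pixels[i] = 0xFF000000 | (luma << 16) | (luma << 8) | luma;
        }
        return pixels;
    }
}
```

This skips the chroma plane entirely, so the result is genuinely grayscale, unlike the decoded JPEG above.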
Answer 2:
Just adding a few extras to set a 300 px box centered in the frame as the detection area. By the way, if you don't pass the frame width and height from the metadata along with the getGrayscaleImageData() buffer, you get weird corrupted bitmaps out.
public SparseArray<Barcode> detect(Frame frame) {
    // *** crop the frame to a centered square here
    int box = 300;
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    int left = (width / 2) - (box / 2);
    int top = (height / 2) - (box / 2);
    int right = (width / 2) + (box / 2);
    int bottom = (height / 2) + (box / 2);

    YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
            ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(left, top, right, bottom), 100, baos); // 100 is the JPEG quality
    byte[] jpegArray = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

    Frame outputFrame = new Frame.Builder().setBitmap(bitmap).build();
    return mDelegate.detect(outputFrame);
}

public boolean isOperational() {
    return mDelegate.isOperational();
}

public boolean setFocus(int id) {
    return mDelegate.setFocus(id);
}
}
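The centered-crop arithmetic in that detect() method is plain integer math, so it can be pulled out into a small, unit-testable helper. A sketch (the class and field names are hypothetical, not part of the Mobile Vision API); its four values map to the left/top/right/bottom arguments of android.graphics.Rect:

```java
public class CropBox {
    public final int left, top, right, bottom;

    // Computes a box of side `box`, centered in a width x height frame.
    public CropBox(int width, int height, int box) {
        left   = (width  / 2) - (box / 2);
        top    = (height / 2) - (box / 2);
        right  = (width  / 2) + (box / 2);
        bottom = (height / 2) + (box / 2);
    }
}
```

Isolating it this way also makes it easy to check edge cases, e.g. a box larger than the frame producing negative coordinates.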
Source: https://stackoverflow.com/questions/32412197/how-to-create-bitmap-from-grayscaled-byte-buffer-image