Android Vision Face Detection with Video Stream


Question


I am trying to integrate the face detection API into a video stream I am receiving from a Parrot Bebop drone.

The stream is decoded with the MediaCodec class (http://developer.android.com/reference/android/media/MediaCodec.html), and this works fine. Instead of rendering the decoded frame data to a SurfaceView, I can successfully access the ByteBuffer containing the decoded frame data from the decoder.

I can also access the decoded Image objects (https://developer.android.com/reference/android/media/Image.html) from the decoder. They have a timestamp, and I get the following information (sketched below the list):

  • width: 640
  • height: 368
  • format: YUV_420_888
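
For context, those values come from the decoder output roughly like this (a simplified sketch of my output loop, where mediaCodec is the configured decoder; error handling is omitted):

MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 10000);
if (outIndex >= 0) {
    // getOutputImage requires API 21+ and an output that is not rendered to a Surface
    Image image = mediaCodec.getOutputImage(outIndex);
    int width = image.getWidth();       // 640
    int height = image.getHeight();     // 368
    int format = image.getFormat();     // 35 == ImageFormat.YUV_420_888
    long timestamp = image.getTimestamp();
    // ... hand the frame data to the face detector here ...
    image.close();
    mediaCodec.releaseOutputBuffer(outIndex, false);
}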

The first thing I tried was generating Frame objects for the Vision API (com.google.android.gms.vision.Frame) via the Frame.Builder (com.google.android.gms.vision.Frame.Builder):

...
ByteBuffer decodedOutputByteBufferFrame = mediaCodec.getOutputBuffer(outIndex);
Image image = mediaCodec.getOutputImage(outIndex);
...
decodedOutputByteBufferFrame.position(bufferInfo.offset);
decodedOutputByteBufferFrame.limit(bufferInfo.offset + bufferInfo.size);
frameBuilder.setImageData(decodedOutputByteBufferFrame, 640, 368, ImageFormat.YV12);
frameBuilder.setTimestampMillis(image.getTimestamp());
Frame googleVisFrame = frameBuilder.build();

This code does not give me any errors and the googleVisFrame object is not null, but when I call googleVisFrame.getBitmap(), I get null. Consequently, face detection does not work (I suppose because there is an issue with my Vision Frame objects...).
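
One thing I am wondering about is the pixel format: the decoder gives me YUV_420_888, while, as far as I can tell, Frame.Builder.setImageData only documents NV16, NV21 and YV12 as accepted formats. If repacking the Image planes is necessary, I imagine it would look something like the following (an untested sketch that assumes the planes have no extra row padding at 640x368, which may not hold for every decoder):

// Untested sketch: repack a YUV_420_888 Image into an NV21 byte array.
// Assumes rowStride == width for the Y plane and no row padding on the chroma planes.
private static byte[] yuv420888ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Y plane, copied as-is
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    yBuffer.get(nv21, 0, width * height);

    // U and V planes, interleaved as V/U pairs (NV21 expects V first)
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    int uvPixelStride = image.getPlanes()[1].getPixelStride(); // 1 (planar) or 2 (semi-planar)
    int pos = width * height;
    for (int i = 0; i < width * height / 4; i++) {
        nv21[pos++] = vBuffer.get(i * uvPixelStride);
        nv21[pos++] = uBuffer.get(i * uvPixelStride);
    }
    return nv21;
}

// ...and then build the Vision frame from the repacked data:
Frame googleVisFrame = new Frame.Builder()
        .setImageData(ByteBuffer.wrap(yuv420888ToNv21(image)), 640, 368, ImageFormat.NV21)
        .setTimestampMillis(image.getTimestamp())
        .build();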

Even if this worked, I am not sure how to handle the video stream with the Vision API, as all the sample code I can find demonstrates its use with the internal camera.
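
From what I have read, it might be possible to skip CameraSource entirely and push frames into the detector myself via a Detector.Processor, roughly like this (again just a sketch of what I imagine; the method name onDecodedFrame is mine and context is my Activity):

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .build();

detector.setProcessor(new Detector.Processor<Face>() {
    @Override
    public void receiveDetections(Detector.Detections<Face> detections) {
        SparseArray<Face> faces = detections.getDetectedItems();
        // handle detected faces here
    }

    @Override
    public void release() {
    }
});

// called once per decoded drone frame instead of using a CameraSource
void onDecodedFrame(Frame googleVisFrame) {
    if (detector.isOperational()) {
        detector.receiveFrame(googleVisFrame);
    }
}

Would that be the intended way to feed a non-camera video stream to the detector?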

If you could point me in the right direction, I would be very thankful.

Source: https://stackoverflow.com/questions/33173525/android-vision-face-detection-with-video-stream
