Access violation in native code with hardware accelerated Android MediaCodec decoder

陌清茗 2021-02-10 01:55

I aim to use the Android MediaCodec for decoding a video stream, then use the output images for further image processing in native code.

Platform: ASUS TF700T, Android 4.

2 Answers
  • 2021-02-10 02:53

    I use the MediaCodec API on a Nexus 4 and get the output color format QOMX_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. I think this is a hardware-specific format that can only be rendered by the hardware itself. Interestingly, I find that when I configure MediaCodec with null versus an actual Surface, the output buffer length changes to an actual value or to 0, respectively. I don't know why. I suggest experimenting on different devices to gather more results. About hardware acceleration, see http://www.saschahlusiak.de/2012/10/hardware-acceleration-on-sgs2-with-android-4-0/
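    As an illustration (not part of the original answer; the decoder handle and timeout value are assumptions), a minimal sketch of checking which color format a given device's decoder reports:

        import android.media.MediaCodec;
        import android.media.MediaFormat;
        import android.util.Log;

        // Sketch only: after the decoder reports INFO_OUTPUT_FORMAT_CHANGED, read
        // KEY_COLOR_FORMAT to see which (possibly vendor-specific, tiled) format
        // this device's decoder actually emits. "decoder" is an already-configured
        // and started MediaCodec; the timeout value is arbitrary.
        void logOutputColorFormat(MediaCodec decoder) {
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int status = decoder.dequeueOutputBuffer(info, 10000 /* us */);
            if (status == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat format = decoder.getOutputFormat();
                int colorFormat = format.getInteger(MediaFormat.KEY_COLOR_FORMAT);
                // Vendor formats (like the Qualcomm tiled format above) show up as
                // values outside the standard OMX COLOR_Format* range.
                Log.d("DecoderProbe", "output color format: 0x"
                        + Integer.toHexString(colorFormat));
            }
        }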

  • 2021-02-10 02:58

    If you configure an output Surface, the decoded data is written to a graphic buffer that can be used as an OpenGL ES texture (via the "external texture" extension). The various bits of hardware get to hand data around in a format they like, and the CPU doesn't have to copy the data.
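    A rough sketch of that Surface-output path, assuming an EGL context is already current on the calling thread and an H.264 stream at 1280x720 (both assumptions, not from the answer):

        import android.graphics.SurfaceTexture;
        import android.media.MediaCodec;
        import android.media.MediaFormat;
        import android.opengl.GLES11Ext;
        import android.opengl.GLES20;
        import android.view.Surface;
        import java.io.IOException;

        // Sketch: route decoder output to a SurfaceTexture so each frame lands in
        // a graphic buffer that can be sampled as a GL_TEXTURE_EXTERNAL_OES
        // texture, with no CPU copy of the pixel data.
        MediaCodec createSurfaceDecoder() throws IOException {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

            SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
            Surface surface = new Surface(surfaceTexture);

            MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
            MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
            decoder.configure(format, surface, null, 0); // Surface output path
            decoder.start();
            // Per frame: releaseOutputBuffer(index, true /* render */), then
            // surfaceTexture.updateTexImage() to latch the frame into the texture.
            return decoder;
        }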

    If you don't configure a Surface, the output goes into a java.nio.ByteBuffer. There's at least one buffer copy to get the data from the MediaCodec-allocated buffer to your ByteBuffer, and presumably another copy to get the data back out into your JNI code. I expect what you're seeing is the overhead cost rather than software decoding cost.
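    To make those copies concrete, here is a hedged sketch of the no-Surface path; decoder is an assumed, already-configured codec and processFrameNative() is a hypothetical JNI method:

        import android.media.MediaCodec;
        import java.nio.ByteBuffer;

        // Hypothetical JNI entry point that receives the copied frame.
        native void processFrameNative(byte[] frame);

        // Sketch: drain one decoded frame through the ByteBuffer path and hand it
        // to native code.
        void drainOneFrame(MediaCodec decoder) {
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int index = decoder.dequeueOutputBuffer(info, 10000 /* us */);
            if (index >= 0) {
                ByteBuffer output = decoder.getOutputBuffers()[index]; // pre-API-21 style
                output.position(info.offset);
                output.limit(info.offset + info.size);
                byte[] frame = new byte[info.size];
                output.get(frame);             // copy #1: codec buffer -> Java array
                processFrameNative(frame);     // copy #2 (typically) at the JNI boundary
                decoder.releaseOutputBuffer(index, false /* do not render */);
            }
        }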

    You might be able to improve matters by sending the output to a SurfaceTexture, rendering into an FBO or pbuffer, and then using glReadPixels to extract the data. If you read into a "direct" ByteBuffer or call glReadPixels from native code, you reduce your JNI overhead. The downside to this approach is that your data will be in RGB rather than YCbCr. (OTOH, if your desired transformations can be expressed in a GLES 2.0 fragment shader, you can get the GPU to do the work instead of the CPU.)
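    A hedged sketch of that readback step, assuming the frame has already been rendered into the currently bound FBO and that width/height hold the frame dimensions:

        import android.opengl.GLES20;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        // Sketch: read the rendered frame out of the currently bound FBO into a
        // direct ByteBuffer; native code can then reach the pixels through
        // GetDirectBufferAddress() without another Java-side copy.
        ByteBuffer readFrame(int width, int height) {
            ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                    .order(ByteOrder.nativeOrder());
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
            // The data is RGBA here (RGB, not YCbCr, as noted above).
            return pixels;
        }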

    As noted in another answer, the decoders on different devices output ByteBuffer data in different formats, so interpreting the data in software may not be viable if portability is important to you.

    Edit: Grafika now has an example of using the GPU to do image processing. You can see a demo video here.
