Why doesn't the decoder of MediaCodec output a unified YUV format (like YUV420P)?


Question


"The MediaCodec decoders may produce data in ByteBuffers using one of the above formats or in a proprietary format. For example, devices based on Qualcomm SoCs commonly use OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m (#2141391876 / 0x7FA30C04)."

This makes it difficult, sometimes even impossible, to deal with the output buffer generically. Why not use a unified YUV format? And why are there so many YUV color formats?
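To see which layout a given decoder actually picked, you can query the output format at runtime. A minimal sketch (`decoder` is assumed to be an already-configured MediaCodec instance):

    // After MediaCodec signals INFO_OUTPUT_FORMAT_CHANGED, ask which layout was chosen.
    MediaFormat outFormat = decoder.getOutputFormat();
    int colorFormat = outFormat.getInteger(MediaFormat.KEY_COLOR_FORMAT);
    switch (colorFormat) {
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:      // 19 (I420)
        case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:  // 21 (NV12)
            // Layouts that generic buffer-handling code can cover.
            break;
        default:
            // Vendor-specific values such as 0x7FA30C04 need per-SoC handling.
            break;
    }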

@fadden, I find it possible to decode to a Surface and get the RGB buffer (as in http://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt). Can I convert the RGB buffer to a YUV format and then encode it?
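For reference, a CPU-side conversion from the RGBA bytes returned by glReadPixels to YUV420P/I420 could look roughly like this (a hypothetical helper, not part of ExtractMpegFramesTest; BT.601 video-range coefficients, even width and height assumed):

    static byte[] rgbaToI420(byte[] rgba, int width, int height) {
        byte[] yuv = new byte[width * height * 3 / 2];
        int uBase = width * height;          // start of U plane
        int vBase = uBase + uBase / 4;       // start of V plane
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int i = (row * width + col) * 4;
                int r = rgba[i] & 0xFF, g = rgba[i + 1] & 0xFF, b = rgba[i + 2] & 0xFF;
                yuv[row * width + col] =
                        (byte) (((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
                if ((row & 1) == 0 && (col & 1) == 0) {   // 2x2 chroma subsampling
                    int c = (row / 2) * (width / 2) + (col / 2);
                    yuv[uBase + c] = (byte) (((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                    yuv[vBase + c] = (byte) (((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
                }
            }
        }
        return yuv;
    }

Converting every frame on the CPU is slow, though; on API 18+ the encoder can take its input directly from a Surface, which avoids the byte-level conversion entirely.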

And, fadden, I tried to use API 18+ and came across some problems. I referred to the ContinuousCaptureActivity and ExtractMpegFramesTest code. In ContinuousCaptureActivity:

    // Set up EGL and make a window surface for the on-screen preview current.
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
    mDisplaySurface.makeCurrent();

    // Create an external (OES) texture, wrap it in a SurfaceTexture,
    // and point the camera preview at it.
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(this);
    mCamera.setPreviewTexture(mCameraTexture);

The FullFrameRect creates the texture object, a SurfaceTexture is built on top of it, and that SurfaceTexture is set as the camera's preview texture.

But in ExtractMpegFramesTest, a CodecOutputSurface is used, and it also creates a texture. How can I use CodecOutputSurface and FullFrameRect together? (One supplies the surface that receives the decoder output, and the other rescales and renders to the encoder input surface.)
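One way to wire them together, sketched under the assumption that the Grafika helper classes shown above are available (names such as mEncoderSurface, mDecoderTexture, encFormat, decFormat, encWidth, encHeight, and ptsNanos are placeholders, not code from either sample):

    // 1) Configure the encoder first and wrap its input surface in a WindowSurface.
    mEncoder = MediaCodec.createEncoderByType("video/avc");
    mEncoder.configure(encFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mEncoderSurface = new WindowSurface(mEglCore, mEncoder.createInputSurface(), true);
    mEncoderSurface.makeCurrent();

    // 2) With that EGL surface current, create the external texture and SurfaceTexture
    //    (as CodecOutputSurface does) and hand a Surface built from it to the decoder.
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mDecoderTexture = new SurfaceTexture(mTextureId);
    mDecoderTexture.setOnFrameAvailableListener(this);
    mDecoder.configure(decFormat, new Surface(mDecoderTexture), null, 0);

    // 3) For each decoded frame: latch it, render it (rescaled by the viewport) into
    //    the encoder surface, stamp the timestamp, swap, then drain the encoder.
    float[] texMatrix = new float[16];
    mDecoderTexture.updateTexImage();
    mDecoderTexture.getTransformMatrix(texMatrix);
    GLES20.glViewport(0, 0, encWidth, encHeight);
    mFullFrameBlit.drawFrame(mTextureId, texMatrix);
    mEncoderSurface.setPresentationTime(ptsNanos);
    mEncoderSurface.swapBuffers();

The key point, following the Grafika pattern, is that the texture/SurfaceTexture must be created while an EGL context is current, and each rendered frame gets its presentation timestamp set before swapBuffers() submits it to the encoder.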


Answer 1:


Let me try to answer the 'why' part. With your kind permission, I will change the order of the questions.

And why are there so many YUV color formats?

YUV is only one of many color spaces used to represent visual information. For many technical and historical reasons, this color space is the most popular for photographic data (including video). Wikipedia claims that YUV was invented when television began to transition from black-and-white to color. Back then, these were analogue signals. Later, different engineers in different corporations and countries independently invented ways to store this YUV data in digital form. No wonder they did not all come up with the same format.

Furthermore, YUV formats differ in the amount of chroma information they store. It is quite natural that YUV 4:2:0, 4:2:2, and 4:4:4 all have a right to exist, offering different compromises between precision and size.

Finally, some of the differences between YUV formats come down to the physical layout of the pixels in memory, optimized for different optical sensors.

Which brings us to the first part of your question:

Why not use a unified YUV format?

Transferring image data from the optical sensor to computer (smartphone) memory is a technical challenge. With a many-megapixel, high-speed live video stream, bandwidth limitations and electronic noise become important. Some YUV formats, like UYVY or OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m, are optimized to reduce congestion on the way from the optical sensor to the byte buffer.

These formats may have a significant advantage when used on the right integrated circuit, or no advantage at all on a different type of hardware.

The same is true for hardware codecs: different implementations of an H.264 decoder may take advantage of cache locality with different interleaved YUV layouts.



Source: https://stackoverflow.com/questions/26463556/why-doesnt-the-decoder-of-mediacodec-output-a-unified-yuv-formatlike-yuv420p
