Modify ExtractMpegFramesTest example to render decoded output on screen


Question


I'm trying to modify ExtractMpegFramesTest to do the rendering on screen and still use glReadPixels to extract the frames.

I copied the relevant code for extracting the frames from ExtractMpegFramesTest (the CodecOutputSurface and STextureRender classes) and the frame extraction works as expected when rendered off screen.

I have a TextureView with a SurfaceTextureListener; when I receive onSurfaceTextureAvailable I take the SurfaceTexture and start the decoding process. I pass this SurfaceTexture to CodecOutputSurface, but it doesn't work.

I'm not sure if this is relevant, but onSurfaceTextureAvailable and the SurfaceTexture are received on the main thread, while all the decoding (including the CodecOutputSurface constructor call) is done on a different thread.

I tried to work with suggestions from here and here but I can't get it to work.

I see this in the logs:

E/BufferQueueProducer: [SurfaceTexture-0-11068-20] connect(P): already connected (cur=1 req=3)
I/MediaCodec: native window already connected. Assuming no change of surface
E/MediaCodec: configure failed with err 0xffffffea, resetting...

I made these modifications to the ExtractMpegFramesTest eglSetup method:

private void eglSetup() {
    mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    if (mEGLDisplay == EGL14.EGL_NO_DISPLAY) {
        throw new RuntimeException("unable to get EGL14 display");
    }
    int[] version = new int[2];
    if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
        mEGLDisplay = null;
        throw new RuntimeException("unable to initialize EGL14");
    }

    int[] attribList = {
                    EGL14.EGL_RED_SIZE, 8,
                    EGL14.EGL_GREEN_SIZE, 8,
                    EGL14.EGL_BLUE_SIZE, 8,
                    EGL14.EGL_ALPHA_SIZE, 8,
                    EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                    EGL14.EGL_SURFACE_TYPE, EGL14.EGL_WINDOW_BIT, // tell it to use a window
                    EGL14.EGL_NONE
    };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    if (!EGL14.eglChooseConfig(mEGLDisplay, attribList, 0, configs, 0, configs.length,
                    numConfigs, 0)) {
        throw new RuntimeException("unable to find RGB888+recordable ES2 EGL config");
    }

    // Configure context for OpenGL ES 2.0.
    int[] attrib_list = {
                    EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
                    EGL14.EGL_NONE
    };

    mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
                    attrib_list, 0);
    checkEglError("eglCreateContext");
    if (mEGLContext == null) {
        throw new RuntimeException("null context");
    }

    int[] surfaceAttribs = {
                    EGL14.EGL_RENDER_BUFFER, EGL14.EGL_SINGLE_BUFFER,
                    EGL14.EGL_NONE
    };

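    // mSurfaceTexture here is the SurfaceTexture delivered by the TextureView's
    // onSurfaceTextureAvailable() and passed into CodecOutputSurface.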
    mSurfaceTexture.setOnFrameAvailableListener(this);

    mSurface = new Surface(mSurfaceTexture);


    mPixelBuf = ByteBuffer.allocateDirect(mWidth * mHeight * 4);
    mPixelBuf.order(ByteOrder.LITTLE_ENDIAN);

    mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, configs[0], mSurface,
                    surfaceAttribs, 0); // create window surface instead of eglCreatePbufferSurface
    checkEglError("eglCreateWindowSurface");
    if (mEGLSurface == null) {
        throw new RuntimeException("surface was null");
    }
}

And to ExtractMpegFramesTest setup method:

private void setup() {
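    // Note: the texture/SurfaceTexture creation from the original setup() is
    // gone here; the SurfaceTexture now comes from the TextureView instead.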
    mTextureRender = new STextureRender();
    mTextureRender.surfaceCreated();

    if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender.getTextureId());
}

Thanks


Answer 1:


If I correctly understand what you're trying to do, you'd want to decode each frame to a SurfaceTexture, which gives you a GLES "external" texture with the data in it. You could then render that to the TextureView, calling glReadPixels() just before eglSwapBuffers().
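For reference, a minimal sketch of that per-frame loop (my sketch, not tested code from the answer), assuming the window-surface EGL setup from the question and the CodecOutputSurface fields and helpers from ExtractMpegFramesTest (awaitNewImage(), drawImage(), mPixelBuf, mWidth, mHeight, mEGLDisplay, mEGLSurface):

import android.opengl.EGL14;
import android.opengl.GLES20;

// Hypothetical per-frame helper, called on the decode thread each time the
// decoder releases an output buffer to the SurfaceTexture.
private void renderAndCaptureFrame() {
    awaitNewImage();   // waits for onFrameAvailable(), then calls updateTexImage()
    drawImage(true);   // STextureRender draws the "external" texture

    // Read the back buffer BEFORE presenting it: once eglSwapBuffers() queues
    // the buffer to the display's BufferQueue, it can no longer be read back
    // from this process.
    mPixelBuf.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf);

    // Now show the frame on screen.
    EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
}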

You can't read data back once it has been sent to a screen Surface, as the consumer of the data lives in a different process. The efficient video path just passes the "external" texture to the Surface, but that won't work here. Ideally you would clone the external texture reference, forwarding one copy to the display Surface and using the other for rendering to an off-screen buffer that you can pull pixels from. (The Camera2 API can do multi-output tricks like this, but I don't know if it's exposed in MediaCodec. I haven't looked in a while though.)
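One single-context way to approximate the "one copy on screen, one copy off screen" idea without cloning anything (again only a sketch under assumptions, not the answer's code) is to draw the same external texture twice: once into a plain off-screen FBO that you read back from, and once into the window surface for display. mFboId is a hypothetical framebuffer object, created elsewhere with glGenFramebuffers() and backed by an RGBA texture of mWidth x mHeight; the other names are the same fields as above.

// Pass 1: off-screen copy, safe to read back (mFboId is hypothetical).
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFboId);
mTextureRender.drawFrame(mSurfaceTexture, false);
mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, mPixelBuf);

// Pass 2: back to the default framebuffer (the EGL window surface), draw the
// same texture again and present it. Adjust glViewport() per target if the
// FBO and the view have different sizes.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
mTextureRender.drawFrame(mSurfaceTexture, false);
EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);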



Source: https://stackoverflow.com/questions/49967214/modify-extractmpegframestest-example-to-render-decoded-output-on-screen
