SurfaceTexture's onFrameAvailable() method always called too late

Anonymous (unverified), submitted 2019-12-03 02:45:02

Question:

I'm trying to get the following MediaExtractor example to work:

http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)

The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.

It seems that onFrameAvailable() cannot be invoked while the thread is busy, and only once the timeout fires and the waiting code returns can the pending onFrameAvailable() call be processed.

Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?

Edit: tried this on devices running Android 4.4 and 4.1.1 and the same happens on both.

Edit 2:

Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join();, which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join(); it works on 4.4. I guess ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so that the main thread didn't get blocked.
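For reference, a minimal sketch (class, method, and tag names are assumptions based on the bigflake example, not verbatim) of kicking off the extraction from a worker thread instead of join()ing on the main thread, so the main Looper keeps spinning and can deliver the SurfaceTexture frame-available messages:

// Run the whole extract/decode loop off the main thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            ExtractMpegFramesTest test = new ExtractMpegFramesTest();
            test.testExtractMpegFrames();   // assumed entry point of the example
        } catch (Throwable t) {
            Log.e("ExtractFrames", "extraction failed", t);
        }
    }
}, "ExtractMpegFrames").start();
// Deliberately no join() here; if the UI needs to know when it finishes,
// post a message back with a Handler or runOnUiThread() instead of blocking.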

There was also a small issue on 4.1.2 when calling codec.configure(); it gave this error:

A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)

Which I solved by adding the following before the call:

format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0); 

However, the problem I now have on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they always set info.size to 0 for all frames. Here is the log output:

loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error

So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with "H264 - MPEG-4 AVC (avc1)" video codec and "MPEG AAC Audio (mp4a)" audio codec.

I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.

Edit 3:

I did as you suggested, and it seems to save the frame images correctly. Thank you.

Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, and setting it to 0, 20, 200, ..., 200000000, all with the same result of info.size=0.

I am now unable to render to a SurfaceView or TextureView in my layout. I tried replacing this line:

mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId()); 

with this, where textureView is a TextureView defined in my XML layout:

mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());

but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?

Answer 1:

The way SurfaceTexture works makes this a bit tricky to get right.

The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):

if (this thread has a looper) {
    handle events on this thread
} else if (there's a "main" looper) {
    handle events on the main UI thread
} else {
    no events for you
}

The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.

So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
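To make the mechanics concrete, here is a simplified sketch (not verbatim from the test's CodecOutputSurface; field and constant names are assumptions) of the wait/notify pattern involved. It only works if onFrameAvailable() is dispatched to a different thread from the one blocked in awaitNewImage():

private final Object mFrameSyncObject = new Object();   // guards mFrameAvailable
private boolean mFrameAvailable;

// Called on the decoding/EGL thread after releasing an output buffer with render=true.
public void awaitNewImage() {
    final int TIMEOUT_MS = 2500;
    synchronized (mFrameSyncObject) {
        while (!mFrameAvailable) {
            try {
                // wait() releases the lock, so onFrameAvailable() can run --
                // but only if it arrives on a *different* thread than this one.
                mFrameSyncObject.wait(TIMEOUT_MS);
                if (!mFrameAvailable) {
                    throw new RuntimeException("frame wait timed out");
                }
            } catch (InterruptedException ie) {
                throw new RuntimeException(ie);
            }
        }
        mFrameAvailable = false;
    }
    // mSurfaceTexture.updateTexImage() is then called on the EGL thread.
}

@Override  // SurfaceTexture.OnFrameAvailableListener
public void onFrameAvailable(SurfaceTexture st) {
    // Arrives via whatever Looper SurfaceTexture picked at construction time.
    // If that Looper belongs to the thread sitting in awaitNewImage(), this
    // never gets a chance to run until the wait times out.
    synchronized (mFrameSyncObject) {
        mFrameAvailable = true;
        mFrameSyncObject.notifyAll();
    }
}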

Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.

If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
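A rough sketch of that direct-to-Surface setup (variable names and the surrounding decode loop are assumptions, not taken from Grafika): get a Surface from the view, hand it to configure(), and render by releasing output buffers with render=true:

// surface: SurfaceHolder.getSurface() for a SurfaceView, or
// new Surface(textureView.getSurfaceTexture()) for a TextureView.
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(filePath);                       // filePath: your .mp4
extractor.selectTrack(videoTrackIndex);                  // index of the video track
MediaFormat format = extractor.getTrackFormat(videoTrackIndex);

MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
decoder.configure(format, surface, null, 0);             // no SurfaceTexture needed
decoder.start();

// ... feed input buffers from the extractor as usual; when an output buffer is
// ready, releasing it with render=true pushes the frame to the Surface:
decoder.releaseOutputBuffer(outputBufferIndex, true /* render */);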

If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)
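In EGL terms, a hedged sketch of that change (field names assumed to mirror CodecOutputSurface's style, shown with the EGL14 API for brevity; the EGL10 path used by the API 16 version has equivalent calls): replace the pbuffer surface with a window surface wrapping the view's Surface, and swap buffers after drawing each frame:

// Off-screen (what the test does so glReadPixels() works headlessly):
//   int[] attribs = { EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE };
//   mEGLSurface = EGL14.eglCreatePbufferSurface(mEGLDisplay, configs[0], attribs, 0);

// On-screen instead: wrap the Surface obtained from the SurfaceView/TextureView.
int[] attribs = { EGL14.EGL_NONE };
mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, configs[0], viewSurface, attribs, 0);

// After drawing each frame with the texture renderer:
EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);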


