Zero-copy Camera Processing and Rendering Pipeline on Android

Asked by 时光说笑 on 2021-02-06 00:41

I need to do a CPU-side read-only process on live camera data (from just the Y plane) followed by rendering it on the GPU. Frames shouldn't be rendered until processing completes.

1 Answer
  • 2021-02-06 01:14

    Interesting question.

    Background Stuff

    Having multiple threads with independent contexts is very common. Every app that uses hardware-accelerated View rendering has a GLES context on the main thread, so any app that uses GLSurfaceView (or rolls their own EGL with a SurfaceView or TextureView and an independent render thread) is actively using multiple contexts.

    Every TextureView has a SurfaceTexture inside it, so any app that uses multiple TextureViews has multiple SurfaceTextures on a single thread. (The framework actually had a bug in its implementation that caused problems with multiple TextureViews, but that was a high-level issue, not a driver problem.)

    SurfaceTexture, a/k/a GLConsumer, doesn't do a whole lot of processing. When a frame arrives from the source (in your case, the camera), it uses some EGL functions to "wrap" the buffer as an "external" texture. You can't do these EGL operations without an EGL context to work in, which is why SurfaceTexture has to be attached to one, and why you can't put a new frame into a texture if the wrong context is current. You can see from the implementation of updateTexImage() that it's doing a lot of arcane things with buffer queues and textures and fences, but none of it requires copying pixel data. The only system resource you're really tying up is RAM, which is not inconsiderable if you're capturing high-resolution images.
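
    For concreteness, a minimal Kotlin sketch of the consumer-side setup, assuming an EGL context is already current on the thread behind the glHandler parameter (the handler and function name here are illustrative, not from the question):

        import android.graphics.SurfaceTexture
        import android.opengl.GLES11Ext
        import android.opengl.GLES20
        import android.os.Handler

        // Sketch: generate the GLES texture name that SurfaceTexture will "wrap"
        // camera buffers into, then latch new frames on the GL thread. Assumes an
        // EGL context is current on glHandler's thread.
        fun createCameraSurfaceTexture(glHandler: Handler): SurfaceTexture {
            val tex = IntArray(1)
            GLES20.glGenTextures(1, tex, 0)
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0])
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
            return SurfaceTexture(tex[0]).apply {
                // updateTexImage() rebinds the newest buffer; no pixels are copied.
                setOnFrameAvailableListener({ st -> st.updateTexImage() }, glHandler)
            }
        }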

    Connections

    An EGL context can be moved between threads, but can only be "current" on one thread at a time. Simultaneous access from multiple threads would require a lot of undesirable synchronization. A given thread has only one "current" context. The OpenGL API evolved from single-threaded with global state to multi-threaded, and rather than rewrite the API they just shoved state into thread-local storage... hence the notion of "current".
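
    A hedged sketch of what moving a context between threads looks like with EGL14 (display, surface, and context creation are assumed to have happened elsewhere):

        import android.opengl.EGL14
        import android.opengl.EGLContext
        import android.opengl.EGLDisplay
        import android.opengl.EGLSurface

        // On thread A: detach the context so no thread owns it.
        fun releaseCurrent(display: EGLDisplay) {
            EGL14.eglMakeCurrent(display, EGL14.EGL_NO_SURFACE,
                    EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_CONTEXT)
        }

        // Later, on thread B: take ownership of the same context.
        fun makeCurrent(display: EGLDisplay, surface: EGLSurface, context: EGLContext) {
            if (!EGL14.eglMakeCurrent(display, surface, surface, context)) {
                throw RuntimeException("eglMakeCurrent failed: 0x"
                        + Integer.toHexString(EGL14.eglGetError()))
            }
        }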

    It's possible to create EGL contexts that share certain things between them, including textures, but if these contexts are on different threads you have to be very careful when the textures are updated. Grafika provides a nice example of getting it wrong.

    SurfaceTextures are built on top of BufferQueues, which have a producer-consumer structure. The fun thing about SurfaceTextures is that they include both sides, so you can feed data in one side and pull it out the other within a single process (unlike, say, SurfaceView, where the consumer is far away). Like all Surface stuff, they're built on top of Binder IPC, so you can feed the Surface from one thread, and safely updateTexImage() in a different thread (or process). The API is arranged such that you create the SurfaceTexture on the consumer side (your process) and then pass a reference to the producer (e.g. camera, which primarily runs in the mediaserver process).
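
    A sketch of that producer/consumer wiring with Camera2, assuming a SurfaceTexture created as above and an already-opened CameraDevice (the preview size is a placeholder; pick one the camera actually advertises):

        import android.graphics.SurfaceTexture
        import android.hardware.camera2.CameraDevice
        import android.view.Surface

        // The consumer side (our process) owns the SurfaceTexture; the producer
        // endpoint is the Surface we hand to the camera over Binder.
        fun buildPreviewTarget(device: CameraDevice, st: SurfaceTexture): Surface {
            st.setDefaultBufferSize(1920, 1080)   // must match a supported size
            val producerSurface = Surface(st)     // producer end of the BufferQueue
            val builder = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            builder.addTarget(producerSurface)
            // After createCaptureSession() configures this Surface, calling
            // session.setRepeatingRequest(builder.build(), null, null) streams
            // camera buffers into the queue with no copies in our process.
            return producerSurface
        }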

    Implementation

    You'll induce a bunch of overhead if you're constantly connecting and disconnecting BufferQueues. So if you want to have three SurfaceTextures receiving buffers, you'll need to connect all three to Camera2's output, and let all of them receive the "buffer broadcast". Then you updateTexImage() in a round-robin fashion. Since SurfaceTexture's BufferQueue runs in "async" mode, you should always get the newest frame with each call, with no need to "drain" a queue.
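
    A sketch of that round-robin consumer, assuming three SurfaceTextures already attached to texture names in the current EGL context (whether a given camera HAL actually supports three simultaneous full-resolution SurfaceTexture outputs is device-dependent):

        import android.graphics.SurfaceTexture
        import android.hardware.camera2.CameraCaptureSession
        import android.hardware.camera2.CameraDevice
        import android.view.Surface

        class RoundRobinConsumer(private val textures: List<SurfaceTexture>) {
            // The same Surface objects must be used both to configure the capture
            // session and as targets of the repeating request.
            val surfaces = textures.map { Surface(it) }
            private var next = 0

            // Producer side: every output listed on the request gets each frame.
            fun startRepeating(device: CameraDevice, session: CameraCaptureSession) {
                val builder = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                surfaces.forEach { builder.addTarget(it) }
                session.setRepeatingRequest(builder.build(), null, null)
            }

            // Consumer side: the queue runs in "async" mode, so each call latches
            // the newest frame for that SurfaceTexture.
            fun latchNext(): SurfaceTexture {
                val st = textures[next]
                next = (next + 1) % textures.size
                st.updateTexImage()
                return st
            }
        }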

    This arrangement wasn't really possible until the Lollipop-era BufferQueue multi-output changes and the introduction of Camera2, so I don't know if anyone has tried this approach before.

    All of the SurfaceTextures would be attached to the same EGL context, ideally in a thread other than the View UI thread, so you don't have to fight over what's current. If you want to access the texture from a second context in a different thread, you will need to use the SurfaceTexture attach/detach API calls, which explicitly support this approach:

    From the attachToGLContext() documentation: "A new OpenGL ES texture object is created and populated with the SurfaceTexture image frame that was current at the time of the last call to detachFromGLContext()."

    Remember that switching EGL contexts is a consumer-side operation, and has no bearing on the connection to the camera, which is a producer-side operation. The overhead involved in moving a SurfaceTexture between contexts should be minor -- less than updateTexImage() -- but you need to take the usual steps to ensure synchronization when communicating between threads.
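
    A sketch of that consumer-side handoff (the cross-thread scheduling that guarantees A has detached before B attaches is assumed and omitted):

        import android.graphics.SurfaceTexture
        import android.opengl.GLES20

        // On thread A, with context A current: release the GLES binding.
        // The camera connection (producer side) is unaffected.
        fun releaseFromContextA(st: SurfaceTexture) {
            st.detachFromGLContext()
        }

        // On thread B, with context B current: adopt the SurfaceTexture. Per the
        // docs quoted above, the new texture is populated with the frame that was
        // current at detach time.
        fun adoptInContextB(st: SurfaceTexture): Int {
            val tex = IntArray(1)
            GLES20.glGenTextures(1, tex, 0)
            st.attachToGLContext(tex[0])
            return tex[0]
        }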

    It's too bad ImageReader lacks a getTimestamp() call, as that would greatly simplify matching up buffers from the camera.
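
    SurfaceTexture itself does expose the producer-supplied timestamp after each latch, which at least helps correlate frames across multiple outputs of the same request:

        import android.graphics.SurfaceTexture

        // After updateTexImage(), getTimestamp() returns the timestamp (in
        // nanoseconds) that the camera attached to the latched frame.
        fun latchAndStamp(st: SurfaceTexture): Long {
            st.updateTexImage()
            return st.timestamp
        }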

    Conclusion

    Using multiple SurfaceTextures to buffer output is possible but tricky. I can see a potential advantage to a ping-pong buffer approach, where one ST is used to receive a frame in thread/context A while the other ST is used for rendering in thread/context B, but since you're operating in real time I don't think there's value in additional buffering unless you're trying to pad out the timing.
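
    For reference, a skeleton of that ping-pong arrangement (moving an ST between contexts at swap time would use the attach/detach calls sketched above; cross-thread signaling is omitted, and all names are illustrative):

        import android.graphics.SurfaceTexture

        // Thread/context A latches into `back` while thread/context B renders
        // from `front`; swap() trades the roles between frames.
        class PingPongPair(a: SurfaceTexture, b: SurfaceTexture) {
            private var front = a
            private var back = b

            @Synchronized fun latchBack(): SurfaceTexture {   // thread/context A
                back.updateTexImage()
                return back
            }

            @Synchronized fun frontForRender(): SurfaceTexture = front   // thread B

            @Synchronized fun swap() {
                val t = front; front = back; back = t
            }
        }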

    As always, the Android System-Level Graphics Architecture doc is recommended reading.
