Using OpenGL ES texture caches instead of glReadPixels to get texture data

Submitted by 喜欢而已 on 2019-12-07 05:34:32

Question


In iOS 5, OpenGL ES texture caches were introduced to provide a direct path from the camera's video data to OpenGL without the need to copy buffers. There was a brief introduction to texture caches in session 414 - Advances in OpenGL ES for iOS 5 of WWDC 2011.

I found an interesting article that pushes this concept further and circumvents a call to glReadPixels by simply locking the texture's backing pixel buffer and accessing it directly.

glReadPixels is really slow on the tile-based renderer used in the iPad 2 (even when you read only 1x1 textures). However, the described method appears to be faster than glReadPixels.

Is the proposed method in the article even valid and can it be used to boost applications which rely on glReadPixels?

Since OpenGL processes graphics data in parallel with the CPU, how can the CVPixelBufferLockBaseAddress call know that rendering has finished without talking to OpenGL?


Answer 1:


I describe a means of doing this in this answer, based on your above-linked article and Apple's ChromaKey sample from WWDC 2011. Given that Apple used this in one of their samples, and that I've not heard anything countering it from their OpenGL ES engineers, I believe this to be a valid use of the texture caches. It has worked on every iOS 5.x-compatible device I've tried, under both iOS 5.0 and 5.1, and it's much, much faster than glReadPixels().
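As a rough sketch of that setup (identifier names like `renderTarget` are my own, and the structure only loosely follows Apple's ChromaKey sample): create an IOSurface-backed CVPixelBuffer, wrap it in a cache texture, and attach that texture to an FBO so rendering lands directly in CPU-visible memory:

```objc
// Hedged sketch, iOS 5+ Objective-C (ARC). Error handling is omitted;
// each CV call returns kCVReturnSuccess on success.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// The pixel buffer must be IOSurface-backed, or the cache cannot share memory.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &renderTarget);

// Wrap the pixel buffer in an OpenGL ES texture via the cache.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA,
    (GLsizei)width, (GLsizei)height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Attach the cache-backed texture as the FBO's color target, then render into it.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);
```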

As far as when to lock the pixel buffer base address, you should be able to use glFinish() (which, unlike glFlush(), blocks until all submitted commands have completed) before locking, so that all data has been rendered into your FBO texture. This has worked for the 30 FPS 1080p movie encoding I've done from texture-backed FBOs.
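Concretely, the readback after rendering might look like this (a sketch continuing the setup above, with `renderTarget` as the FBO-backing pixel buffer):

```objc
// ... issue draw calls into the FBO backed by renderTarget ...

glFinish();  // block until the GPU has finished writing into the texture

CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget); // rows may be padded

// Read the BGRA pixels directly -- no glReadPixels, no extra copy.
// e.g. processFrame(pixels, bytesPerRow, width, height);

CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
```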



Source: https://stackoverflow.com/questions/9261450/using-opengl-es-texture-caches-instead-of-glreadpixels-to-get-texture-data
