How to directly update pixels - with CGImage and direct CGDataProvider


Question


Actual Question

Several answers will solve my problem:

  1. Can I force a CGImage to reload its data from a direct data provider (created with CGDataProviderCreateDirect) the way CGContextDrawImage does? Or is there some other way I can get setting self.layer.contents to do it?
  2. Is there a CGContext configuration or trick I can use to render 1024x768 images consistently at 30 fps or better with CGContextDrawImage? (A minimal sketch of the kind of draw I mean follows this list.)
  3. Has anyone been able to successfully use CVOpenGLESTextureCacheCreateTextureFromImage for realtime buffer updates with their own texture data? I think my biggest problem is creating a CVImageBuffer, as I copied the other properties from Apple's documentation for textures. If anyone has any more information on this that would be awesome.
  4. Any other guidelines on how I can get an image from memory onto the screen at 30 fps?
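For reference, the kind of per-frame draw I mean in question 2 looks roughly like this. It is only a sketch: the UIView subclass and the _image ivar (holding the provider-backed CGImageRef) are placeholder names, not code from this post, and the method lives inside the view's @implementation.

    // Sketch: drawing the provider-backed CGImage each frame with CoreGraphics.
    - (void)drawRect:(CGRect)rect
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetInterpolationQuality(ctx, kCGInterpolationNone); // disabling interpolation helped (~6 -> ~18 fps)
        // CoreGraphics' coordinate system is flipped relative to UIKit; flip so the image draws upright.
        CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextDrawImage(ctx, self.bounds, _image);
    }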

Background (lots):

I am working on a project where I need to modify the pixels of NPOT image data in realtime (minimum of 30 fps) and draw that on the screen in iOS.

My first thought was to use OpenGL with glTexSubImage2D to update the texture; unfortunately that ended up being really slow (6 fps on iPad) as the driver swizzles and converts my RGB data to BGR every frame. So send it in BGR, you say, and so do I, but for some reason you cannot call glTexSubImage2D with GL_BGR, go figure. I know some of the slowness comes from it being non-power-of-two image data, but my requirements dictate that.
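For context, the per-frame upload described above is roughly the following. This is a sketch run inside the render loop; _texture, _pixels, _width and _height are placeholder names, and the data is assumed to be RGBA8888.

    // Sketch: re-uploading the whole NPOT frame every tick with glTexSubImage2D.
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,          // target, mip level
                    0, 0,                      // x/y offset into the existing texture
                    _width, _height,           // NPOT dimensions
                    GL_RGBA, GL_UNSIGNED_BYTE, // format the driver may still convert internally
                    _pixels);                  // CPU-side buffer modified every frame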

More reading led me to CVOpenGLESTextureCacheCreateTextureFromImage, but all the examples use direct camera input to obtain a CVImageBufferRef. I tried using the documentation (nothing official yet, just header comments) to make my own CVImageBuffer from my image data, but it would not work (no errors, just an empty texture in the debugger), which makes me think Apple built this specifically to process realtime camera data and it has not been tested much outside of that area, but I don't know.

Anyway, after giving up my dignity by dumping OpenGL and switching my thoughts to CoreGraphics, I was led to this question, fastest way to draw a screen buffer on the iphone, which recommends using a CGImage backed by CGDataProviderCreateDirect, which allows you to return a pointer to your image data whenever the CGImage needs it. Awesome, right? Well, it doesn't seem to quite work as advertised. If I use CGContextDrawImage then everything works: I can modify the pixel buffer, and on every draw it requests the image data from my data provider like it should, calling the methods in CGDataProviderDirectCallbacks (note: they seem to have a built-in optimization that ignores the updated pointer if it has the same address as the previous one). CGContextDrawImage is not super fast (about 18 fps), even with interpolation disabled, which brought it up from around 6 fps.

Apple's docs tell me that using self.layer.contents will be much faster than CGContextDrawImage. Using self.layer.contents works for the first assignment, but the CGImage never requests a reload from the data provider the way CGContextDrawImage does, even when I call [layer setNeedsDisplay]. In the SO question I referenced, the user shows his solution to the problem: creating and destroying a new CGImage from the data source every frame, a hopelessly slow process (yes, I did try it). So, time for the real question.
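For anyone who has not used it, the direct-provider setup discussed above looks roughly like this. It is a sketch only: the buffer, dimensions and callback names are placeholders, the pixel data is assumed to be 32-bit RGBX, and the setup code would live in something like an init method.

    // Sketch: a CGImage backed by CGDataProviderCreateDirect, so CoreGraphics
    // asks our callbacks for the pixel pointer on every draw.
    static const void *GetBytePointer(void *info)               { return info; } // hand back the live buffer
    static void ReleaseBytePointer(void *info, const void *ptr) { /* buffer is owned elsewhere */ }

    // Fields: version, getBytePointer, releaseBytePointer, getBytesAtPosition, releaseInfo.
    CGDataProviderDirectCallbacks callbacks = { 0, GetBytePointer, ReleaseBytePointer, NULL, NULL };

    size_t width = 1024, height = 768, bytesPerRow = width * 4;
    void *pixels = malloc(bytesPerRow * height);                 // the buffer we modify every frame

    CGDataProviderRef provider = CGDataProviderCreateDirect(pixels, bytesPerRow * height, &callbacks);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                     provider, NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    // CGContextDrawImage(ctx, rect, image) calls GetBytePointer again on each draw;
    // assigning the same image to layer.contents does not, which is the problem described above.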

Note: I have profiled all of these operations and know that glTexSubImage2D really is the problem on the OpenGL side and CGContextDrawImage really is the problem on the CoreGraphics side, so no "go profile it" answers.

EDIT: Source code demonstrating this technique can now be found at http://github.com/narpas/image-sequence-streaming


Answer 1:


Thanks to Brad Larson and David H for helping out with this one (see our full discussion in the comments). It turns out that using OpenGL and CoreVideo with CVOpenGLESTextureCache ended up being the fastest way to push raw images to the screen (I knew CoreGraphics couldn't be the fastest!), giving me 60 fps with fullscreen 1024x768 images on an iPad 1. There is little documentation on this at the moment, so I will try to explain as much as possible to help people:

CVOpenGLESTextureCacheCreateTextureFromImage allows you to create an OpenGL texture whose memory is directly mapped to the CVImageBuffer you use to create it. This lets you create, say, a CVPixelBuffer with your raw data and modify the data pointer gathered from CVPixelBufferGetBaseAddress. This gives you instant results in OpenGL without any need to modify or re-upload the actual texture. Just be sure to lock with CVPixelBufferLockBaseAddress before modifying pixels and unlock when you're done. Note: at this time this does not work in the iOS Simulator, only on device, which I speculate is due to the VRAM/RAM division, where the CPU has no direct access to VRAM. Brad recommended using a conditional compiler check to switch between raw glTexImage2D updates and using texture caches.
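The setup described above, roughly. This is a sketch with error checking removed and is written assuming ARC; _context is assumed to be an existing OpenGL ES 2.0 EAGLContext, 32-bit BGRA is assumed as the pixel format, and GL_BGRA comes from <OpenGLES/ES2/glext.h>.

    // Sketch: an IOSurface-backed CVPixelBuffer mapped straight into a GL texture.
    CVOpenGLESTextureCacheRef textureCache;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &textureCache);

    // The IOSurface backing (see item 2 in the list below) is what makes the zero-copy mapping work on device.
    NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };

    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(kCFAllocatorDefault, 1024, 768, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attributes, &pixelBuffer);

    CVOpenGLESTextureRef cvTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA,    // target, internal format
                                                 1024, 768,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, // layout of the pixel buffer
                                                 0, &cvTexture);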

Several things to watch out for (a combination of these caused it to not work for me):

  1. Test on device
  2. Make sure your CVPixelBuffer is backed with kCVPixelBufferIOSurfacePropertiesKey; see the link for an example (thanks again Brad).
  3. You must use GL_CLAMP_TO_EDGE for NPOT texture data with OpenGL ES 2.0
  4. Bind the cached texture with glBindTexture(CVOpenGLESTextureGetTarget(_cvTexture), CVOpenGLESTextureGetName(_cvTexture)); don't be stupid like me and pass CVOpenGLESTextureGetTarget for both parameters.
  5. Don't recreate the texture every frame; simply copy image data into the pointer obtained from CVPixelBufferGetBaseAddress to update the texture (the sketch below puts these steps together).
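Putting items 3-5 together, the per-frame update ends up looking roughly like this. It is a sketch; pixelBuffer and cvTexture continue from the sketch above, and newFrameBytes is a placeholder for wherever the new frame comes from.

    // Sketch: update the pixels and rebind -- no texture recreation, no glTexSubImage2D.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);                    // lock before touching pixels
    void *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    memcpy(dst, newFrameBytes,
           CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);                  // unlock when done

    // Item 4: target from GetTarget, name from GetName -- not GetTarget twice.
    glBindTexture(CVOpenGLESTextureGetTarget(cvTexture), CVOpenGLESTextureGetName(cvTexture));

    // Item 3: NPOT textures in ES 2.0 need clamp-to-edge wrapping (and no mipmaps).
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // ...then draw the textured quad as usual.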


Source: https://stackoverflow.com/questions/12187801/how-to-directly-update-pixels-with-cgimage-and-direct-cgdataprovider
