For my current project I'm reading the main camera output of the iPhone. I'm then converting the pixel buffer to a cached OpenGL texture through the method CVOpenGLESTextureCacheCreateTextureFromImage.
You can't cast a still-image texture to a CVOpenGLESTextureCacheRef. Core Video lets you map video frames directly to OpenGL ES textures: you hand it a video pixel buffer, and Core Video creates the textures and gives them back to you, already in video memory.
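For reference, here is a minimal sketch of that path, loosely modeled on Apple's GLCameraRipple sample. It assumes an OpenGL ES 2.0 EAGLContext stored in _context, BGRA output from an AVCaptureVideoDataOutput, and illustrative ivar names (_textureCache is not an API name):

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>   // defines GL_BGRA on iOS

CVOpenGLESTextureCacheRef _textureCache;  // create once, reuse every frame

- (void)setupTextureCache {
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                                _context, NULL, &_textureCache);
    if (err != kCVReturnSuccess) {
        NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
    }
}

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

    // Core Video wraps the buffer's memory in a GL texture; no glTexImage2D copy.
    CVOpenGLESTextureRef texture = NULL;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, _textureCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_RGBA, width, height,
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
    if (err != kCVReturnSuccess || texture == NULL) { return; }

    glBindTexture(CVOpenGLESTextureGetTarget(texture),
                  CVOpenGLESTextureGetName(texture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // ... render with the texture ...

    CFRelease(texture);
    CVOpenGLESTextureCacheFlush(_textureCache, 0);  // recycle cache entries
}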
To create the OpenGL ES texture, this link may help you: Link
The iPhone 4 (as well as the iPhone 3GS and iPod Touch 4th gen.) uses a PowerVR SGX 535 GPU, for which the maximum OpenGL ES texture size is 2048x2048. This value can be queried by calling:

GLint maxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
The iPod Touch 4th gen. captures stills at 720x960 and the iPhone 3GS at 1536x2048, but the iPhone 4's rear-facing camera resolution is 1936x2592, which is too large to fit onto a single texture.
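As a quick sanity check before uploading (assuming a pixelBuffer variable holding the captured frame; the numbers in the comments are the iPhone 4 case):

GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width > (size_t)maxTextureSize || height > (size_t)maxTextureSize) {
    // Fit the longer side exactly to the limit, assuming portrait stills
    // (height is the longer side). iPhone 4: 2592 -> 2048, 1936 -> 1529.
    size_t newHeight = (size_t)maxTextureSize;
    size_t newWidth  = width * (size_t)maxTextureSize / height;
    // ... redraw the pixel buffer at newWidth x newHeight (see below) ...
}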
You can always redraw the captured image at a smaller size that preserves its aspect ratio (1529x2048). Brad Larson does this in his GPUImage framework, and it's pretty straightforward: redraw the contents of the original pixel buffer using Core Graphics, then make another pixel buffer out of the redrawn data. The rest of the framework is a great resource as well.
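Here's a rough sketch of that redraw step for a BGRA pixel buffer. The function name and attribute choices are illustrative, not GPUImage's actual code, and error checking is omitted:

#import <CoreVideo/CoreVideo.h>
#import <UIKit/UIKit.h>

CVPixelBufferRef CreateScaledPixelBuffer(CVPixelBufferRef source,
                                         size_t newWidth, size_t newHeight) {
    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);

    // Wrap the original BGRA bytes in a bitmap context and snapshot a CGImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef sourceContext = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(source),
        CVPixelBufferGetWidth(source), CVPixelBufferGetHeight(source), 8,
        CVPixelBufferGetBytesPerRow(source), colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef sourceImage = CGBitmapContextCreateImage(sourceContext);

    // Make the destination pixel buffer and draw the snapshot into it, scaled.
    NSDictionary *attributes =
        @{ (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef scaled = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, newWidth, newHeight,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attributes, &scaled);

    CVPixelBufferLockBaseAddress(scaled, 0);
    CGContextRef destContext = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(scaled), newWidth, newHeight, 8,
        CVPixelBufferGetBytesPerRow(scaled), colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(destContext,
                       CGRectMake(0, 0, newWidth, newHeight), sourceImage);
    CVPixelBufferUnlockBaseAddress(scaled, 0);

    CGContextRelease(destContext);
    CGImageRelease(sourceImage);
    CGContextRelease(sourceContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return scaled;  // caller releases with CVPixelBufferRelease
}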