Question
I am using GLKit, and at 24 fps I need to take a CGImageRef (no alpha layer) and apply another CGImageRef (no alpha layer) as a mask to it (black and white image) and render the result as a GLKit texture.
At first I tried this approach:
CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                          CGImageGetHeight(maskRef),
                                          CGImageGetBitsPerComponent(maskRef),
                                          CGImageGetBitsPerPixel(maskRef),
                                          CGImageGetBytesPerRow(maskRef),
                                          CGImageGetDataProvider(maskRef),
                                          NULL, false);
CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
And although this would work when assigning the resulting UIImage to a UIImageView, it would not work as a GLKit texture (the mask would not be applied). However, if I redraw the image using:
UIImage *maskedImg = [UIImage imageWithCGImage:masked];
UIGraphicsBeginImageContext(maskedImg.size);
[maskedImg drawInRect:CGRectMake(0, 0, maskedImg.size.width, maskedImg.size.height)];
UIImage* resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The resulting image would be masked and render correctly in the GLKit texture. However, this lowers the performance to about 6 fps on an iPhone 4S.
I have tried this masking approach without success.
I have tried doing this without Core Graphics, using GLKit blending as shown in this example by Ray Wenderlich. However, that approach requires alpha transparency in the mask, which is a deal breaker for me.
I also found an interesting example of doing exactly what I want to do with a library called AVAnimator and an example called KittyBoom. Here they are replacing pixels manually. I want to get this same result using GLKit.
Any further direction or guidance would be helpful here. Thanks in advance.
Answer 1:
GLKit is a framework with four major components that may be used together or separately: view/view controller, math library, texture loader, and effects. I presume from your question that the issue you're having concerns GLKBaseEffect.
One might say GLKBaseEffect's purpose in life is to make routine things easy. It's primarily a replacement for the OpenGL ES 1.1 fixed-function pipeline and some of the common tricks done with it.
What you're after isn't a routine fixed-function pipeline task, so to do it well means stepping beyond the basic functionality of GLKBaseEffect and writing your own shaders. That's the bad news. The good news is that once you're into the shader world, this is pretty easy. The other good news is that you can expect great performance -- instead of using the CPU to do Core Graphics blending, you can do it on the GPU.
The "hard" part, if you're unfamiliar with OpenGL ES 2.0 (or 3.0) shaders, is replacing the client OpenGL ES code and vertex shader provided by GLKBaseEffect with custom code that does the same. There's lots of example code and tutorials for this out there, though. (Since you've been following Ray Wenderlich's site, I'm sure you'll find some good ones there.) The main things you need to do here are:
- Transform vertices from the coordinate space you're using for your glDraw commands into clip space. For 2D stuff this is pretty easy, since you can use the same matrix setup you were doing for GLKBaseEffect's transform property.
- Load your two textures (GLKTextureLoader can help you a lot here) and bind them to two texture units.
- In your client code, associate each vertex with some texture coordinates. (You're probably doing this already.)
- In your vertex shader, take the texture coordinates from an attribute input and pass them to a varying output. (You don't have to do anything else unless you want to transform the coordinates for some reason.)
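The vertex-shader side of those last steps can be sketched like this; the names position, texCoordIn, and modelViewProjection are placeholders for illustration, so match them to whatever your client code actually binds:

attribute vec4 position;      // per-vertex inputs supplied by client code
attribute vec2 texCoordIn;

// the same matrix you'd have set on GLKBaseEffect's transform property
uniform mat4 modelViewProjection;

// passed through to (and interpolated for) the fragment shader
varying mediump vec2 texCoord;

void main() {
    texCoord = texCoordIn;
    gl_Position = modelViewProjection * position;
}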
After all that setup, the part that actually accomplishes the effect you're after is pretty easy, and it's all in your fragment shader:
// texture units bound in client code
uniform sampler2D colorTexture;
uniform sampler2D maskTexture;
// texture coordinate passed in from vertex shader
// (and automatically interpolated across the face of the polygon)
varying mediump vec2 texCoord;
void main() {
    // read a color from the color texture
    lowp vec3 color = texture2D(colorTexture, texCoord).rgb;
    // read a gray level from the mask texture
    // (the mask is grayscale, so we only need one component
    // of its color because the others are the same)
    lowp float maskLevel = texture2D(maskTexture, texCoord).r;
    // use the gray level from the mask as the alpha value in the output color
    gl_FragColor = vec4(color, maskLevel);
}
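For the two sampler uniforms above to work, the client code has to bind each texture to a unit and tell the shader which unit each sampler reads from. A minimal sketch, assuming program is your linked shader program and colorTextureName/maskTextureName are the GLuint names from GLKTextureLoader (GLKTextureInfo's name property):

glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTextureName);
glUniform1i(glGetUniformLocation(program, "colorTexture"), 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, maskTextureName);
glUniform1i(glGetUniformLocation(program, "maskTexture"), 1);

Note that for the alpha the shader writes to actually matter on screen, you'll also need blending enabled in the client code (glEnable(GL_BLEND) with an appropriate glBlendFunc).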
Also, since you mention doing this at 24 fps, it suggests you're working with video. In that case, you can cut out the middleman -- video frames are already being handled on the GPU, and putting them in CGImages on the CPU just slows you down. Take a look at the CVOpenGLESTexture class and the GLCameraRipple sample code for help getting video into a texture.
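The Core Video route can be sketched roughly as follows; this assumes eaglContext is your EAGLContext and pixelBuffer is a CVPixelBufferRef delivered by AVFoundation (e.g. from an AVCaptureVideoDataOutput or AVPlayerItemVideoOutput):

// One-time setup: create a texture cache tied to your EAGL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer as a GL texture without a CPU copy.
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
    (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
    (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));

// ... draw with the texture, then release it when the frame is done:
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);

See the GLCameraRipple sample for the full setup around this, including the pixel format configuration on the capture output.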
Source: https://stackoverflow.com/questions/19550488/glkit-masking-or-blending-2-textures-both-from-jpegs-no-alphas