How to get UIImage from EAGLView?

悲&欢浪女 2020-11-28 09:09

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?

6 Answers
  • 2020-11-28 09:20

    I was unable to get the other answers here to work correctly for me.

    After a few days I finally got a working solution. Apple provides code which produces a UIImage from an EAGLView; you then simply need to flip the image vertically, since UIKit's coordinate system is upside down relative to OpenGL's.

    Apple's provided method, modified to live inside the view you want to capture as an image:

    // Despite the name, this returns a UIImage built from the view's color renderbuffer.
    - (UIImage *)drawableToCGImage
    {
        GLint backingWidth2, backingHeight2;

        // Bind the color renderbuffer used to render the OpenGL ES view.
        // If your application only creates a single color renderbuffer which is already bound
        // at this point, this call is redundant, but it is needed if you're dealing with
        // multiple renderbuffers. Note: "viewRenderbuffer" should be the name of the
        // renderbuffer object defined in your class.
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

        // Get the size of the backing CAEAGLLayer, in pixels.
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

        NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
        NSInteger dataLength = width2 * height2 * 4;
        GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

        // Read pixel data from the framebuffer.
        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

        // Create a CGImage with the pixel data.
        // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the
        // alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
        CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                        kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                        ref, NULL, true, kCGRenderingIntentDefault);

        // OpenGL ES measures data in PIXELS.
        // Create a graphics context with the target size measured in POINTS.
        NSInteger widthInPoints, heightInPoints;
        if (NULL != UIGraphicsBeginImageContextWithOptions) {
            // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the
            // scale into consideration. Set the scale parameter to your OpenGL ES view's
            // contentScaleFactor so that you get a high-resolution snapshot when its value
            // is greater than 1.0.
            CGFloat scale = self.contentScaleFactor;
            widthInPoints = width2 / scale;
            heightInPoints = height2 / scale;
            UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
        }
        else {
            // On iOS versions prior to 4, fall back to UIGraphicsBeginImageContext.
            widthInPoints = width2;
            heightInPoints = height2;
            UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
        }

        CGContextRef cgcontext = UIGraphicsGetCurrentContext();

        // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
        // Flip the CGImage by rendering it to the flipped bitmap context.
        // The size of the destination area is measured in POINTS.
        CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
        CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

        // Retrieve the UIImage from the current context.
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

        UIGraphicsEndImageContext();

        // Clean up.
        free(data);
        CFRelease(ref);
        CFRelease(colorspace);
        CGImageRelease(iref);

        return image;
    }

    And here's a method to flip the image vertically:

    - (UIImage *)flipImageVertically:(UIImage *)originalImage
    {
        UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
        UIGraphicsBeginImageContext(tempImageView.frame.size);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Flip the coordinate system vertically before rendering the layer.
        CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
        CGContextConcatCTM(context, flipVertical);

        [tempImageView.layer renderInContext:context];

        UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        //[tempImageView release]; // uncomment under manual reference counting (MRC)

        return flippedImage;
    }

    And here's a link to the Apple developer page where I found the first method, for reference: http://developer.apple.com/library/ios/#qa/qa1704/_index.html
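
    Putting the two together, a minimal usage sketch from inside the EAGLView subclass (the variable names are just illustrative):

        UIImage *rawImage = [self drawableToCGImage];            // capture the renderbuffer contents
        UIImage *snapshot = [self flipImageVertically:rawImage]; // correct for UIKit's flipped coordinates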

  • 2020-11-28 09:25
    -(UIImage *) saveImageFromGLView
    {
        NSInteger myDataLength = 320 * 480 * 4;
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y <480; y++)
        {
            for(int x = 0; x <320 * 4; x++)
            {
                buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
            }
        }
        free(buffer); // the un-flipped copy is no longer needed
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * 320;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        // make the cgimage
        CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        // then make the uiimage from that
        UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    
        CGImageRelease( imageRef );
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        free(buffer2); // NOTE: the image's data provider still points at this buffer; see the release-callback answer below
    
        return myImage;
    }
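
    Note that the 320 × 480 dimensions above are hard-coded for a non-Retina portrait screen. A small sketch of querying the actual backing-store size instead (this assumes the color renderbuffer is currently bound, as in the first answer above):

        GLint backingWidth, backingHeight;
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
        NSInteger myDataLength = backingWidth * backingHeight * 4; // then use these instead of 320/480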
    
  • 2020-11-28 09:26

    CGDataProviderCreateWithData takes a release callback for the data, and that callback is where you should free the buffer:

    void releaseBufferData(void *info, const void *data, size_t size)
    {
        free((void*)data);
    }
    

    Then proceed as in the other examples, but do NOT free the data yourself here:

    GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
    ....
    CGDataProviderRelease(provider);
    

    Or simply use CGDataProviderCreateWithCFData instead, without any release-callback machinery:

    GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
    NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    ....
    CGDataProviderRelease(provider);
    free(bufferData); // Remember to free it
    

    For more information, see this discussion:

    What's the right memory management pattern for buffer->CGImageRef->UIImage?
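
    For illustration, here is a minimal end-to-end sketch of the callback approach; the method name and the assumption that the correct color renderbuffer is already bound are mine:

        // Called by CoreGraphics when nothing references the provider's data anymore.
        static void releaseBufferData(void *info, const void *data, size_t size)
        {
            free((void *)data);
        }

        // Returns a +1 CGImageRef (caller releases it with CGImageRelease).
        - (CGImageRef)newSnapshotImageWithWidth:(GLint)width height:(GLint)height
        {
            size_t dataLength = (size_t)width * (size_t)height * 4;
            GLubyte *bufferData = (GLubyte *)malloc(dataLength);
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, bufferData);

            CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, dataLength, releaseBufferData);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                                provider, NULL, NO, kCGRenderingIntentDefault);
            CGColorSpaceRelease(colorSpace);
            CGDataProviderRelease(provider); // the image retains the provider; bufferData is freed via the callback
            return imageRef;
        }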

  • 2020-11-28 09:30

    EDIT: as demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level -[UIView drawViewHierarchyInRect:afterScreenUpdates:]. Other than that, this should work the same.

    An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means that the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about UIWebView doesn't matter; that's just yet another kind of view.)
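
    As a rough sketch of that higher-level approach (assuming iOS 7 or later, and that `glView` is the EAGLView instance you want to capture):

        UIGraphicsBeginImageContextWithOptions(glView.bounds.size, NO, 0.0); // 0.0 = use the main screen's scale
        [glView drawViewHierarchyInRect:glView.bounds afterScreenUpdates:YES];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();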

  • 2020-11-28 09:30

    To use the capture code above, as in Brad Larson's approach, you also have to edit your EAGLView.m:

    - (id)initWithCoder:(NSCoder*)coder{
        self = [super initWithCoder:coder];
        if (self) {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
            eaglLayer.opaque = TRUE;
            eaglLayer.drawableProperties = 
                [NSDictionary dictionaryWithObjectsAndKeys: 
                    [NSNumber numberWithBool:YES],  kEAGLDrawablePropertyRetainedBacking, 
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        }
        return self;
    }
    

    You have to set the kEAGLDrawablePropertyRetainedBacking value to YES, so that the renderbuffer contents are preserved after presentation and can still be read back.

  • 2020-11-28 09:37

    Here is a cleaned-up version of Quakeboy's code. I tested it on the iPad, and it works just fine. The improvements include:

    • works with any size EAGLView
    • works with retina display (point scale 2)
    • replaced nested loop with memcpy
    • cleaned up memory leaks
    • saves the UIImage to the photo album as a bonus.

    Use this as a method in your EAGLView:

    -(void)snapUIImage
    {
        int s = 1;
        UIScreen* screen = [ UIScreen mainScreen ];
        if ( [ screen respondsToSelector:@selector(scale) ] )
            s = (int) [ screen scale ];
    
        const int w = self.frame.size.width;
        const int h = self.frame.size.height;
        const NSInteger myDataLength = w * h * 4 * s * s;
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y < h*s; y++)
        {
            memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
        }
        free(buffer); // work with the flipped buffer, so get rid of the original one.
    
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * w * s;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        // make the cgimage
        CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        // then make the uiimage from that
        UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
        UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );
        CGImageRelease( imageRef );
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        free(buffer2);
    }
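
    A note on when to call it: glReadPixels reads from the currently bound renderbuffer, so a typical call site is right after issuing the drawing commands and before presenting, unless kEAGLDrawablePropertyRetainedBacking is YES (see the answer above). A hedged sketch, where drawFrame and context are assumed names from your own EAGLView:

        [EAGLContext setCurrentContext:context];
        [self drawFrame];                                   // hypothetical method that issues the GL drawing calls
        [self snapUIImage];                                 // read the pixels while they are still in the renderbuffer
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];  // then present as usual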
    