GLPaint save image

南笙 2021-01-31 23:39

I'm trying to develop a complex painting application on the iPhone. I'm currently drawing using Quartz (e.g. CGContext...). Unfortunately the Quartz overhead is …

5 Answers
  • 2021-02-01 00:20
    -(UIImage *) saveImageFromGLView
    {
        NSInteger myDataLength = 320 * 480 * 4;
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for (int y = 0; y < 480; y++)
        {
            for (int x = 0; x < 320 * 4; x++)
            {
                buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
            }
        }
        free(buffer); // the flipped copy is complete, so release the original
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * 320;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        // make the cgimage
        CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        // then make the uiimage from that
        UIImage *myImage = [UIImage imageWithCGImage:imageRef];
        return myImage;
    }
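
    A note on the flip loop above: copying one row at a time with memcpy does the same job with far fewer iterations. A minimal C sketch of that row-wise flip (assuming the same tightly packed RGBA layout, 4 bytes per pixel; the function name is illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Flip an RGBA pixel buffer vertically, one whole row at a time. */
static void flip_rows(const unsigned char *src, unsigned char *dst,
                      int width, int height) {
    size_t bytesPerRow = (size_t)width * 4; /* 4 bytes per RGBA pixel */
    for (int y = 0; y < height; y++) {
        /* Row y of the source becomes row (height - 1 - y) of the destination. */
        memcpy(dst + (size_t)(height - 1 - y) * bytesPerRow,
               src + (size_t)y * bytesPerRow,
               bytesPerRow);
    }
}
```

    The same function works for the dynamic-size variants in the other answers; only the width and height arguments change.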
    
  • 2021-02-01 00:22

    Unlike the solution above, this code does not leak memory, and it also handles dynamic view sizes and Retina vs. standard displays:

    -(BOOL)iPhoneRetina{
        return ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0))?YES:NO;
    }
    
    void releasePixels(void *info, const void *data, size_t size) {
        free((void*)data);
    }
    
    -(UIImage *) glToUIImage{
    
        int imageWidth, imageHeight;
    
        int scale = [self iPhoneRetina]?2:1;
    
        imageWidth = self.frame.size.width*scale;
        imageHeight = self.frame.size.height*scale;
    
        NSInteger myDataLength = imageWidth * imageHeight * 4;
    
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releasePixels);
    
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * imageWidth;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    
        // make the cgimage
    
        CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    
        UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationDownMirrored]; //Render image flipped, since OpenGL's data is mirrored
    
        CGImageRelease(imageRef);
        CGColorSpaceRelease(colorSpaceRef);
    
        CGDataProviderRelease(provider);
    
        return myImage;
    }
    

    The other answers leak memory because the last parameter to CGDataProviderCreateWithData is supposed to be a callback that frees the pixel buffer, and they also omit the CGImageRelease, CGColorSpaceRelease, and CGDataProviderRelease calls.
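
    To make that ownership hand-off concrete, here is the same callback pattern in plain C, independent of CoreGraphics: the consumer owns the buffer from creation until it invokes the release callback. The function names below are illustrative, not CoreGraphics API; only the callback signature matches CGDataProviderReleaseDataCallback.

```c
#include <stdlib.h>

/* Same shape as CGDataProviderReleaseDataCallback: invoked exactly once,
   when the consumer is finished with the buffer. */
typedef void (*release_cb)(void *info, const void *data, size_t size);

static int release_count = 0;

static void release_pixels(void *info, const void *data, size_t size) {
    (void)info; (void)size;
    free((void *)data);   /* ownership ends here, not at the call site */
    release_count++;
}

/* Stand-in for a consumer such as a CGDataProvider: it owns the buffer
   from creation until it calls the release callback. */
static void consume_and_release(const void *data, size_t size, release_cb cb) {
    /* ... consumer reads the pixels here ... */
    cb(NULL, data, size);
}
```

    Passing NULL instead of a real callback, as the other answers do, means nobody ever frees the buffer.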

  • 2021-02-01 00:23

    Same as @Quakeboy's answer, but passing in the view so that the size can be dynamically determined (I used this for my universal app):

    - (UIImage *)saveImageFromGLView:(UIView *)glView {
        int width = glView.frame.size.width;
        int height = glView.frame.size.height;
    
        NSInteger myDataLength = width * height * 4;
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y < height; y++)
        {
            for(int x = 0; x < width * 4; x++)
            {
                buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
            }
        }
        free(buffer); // the flipped copy is complete, so release the original
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * width;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        // make the cgimage
        CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        // then make the uiimage from that
        UIImage *myImage = [UIImage imageWithCGImage:imageRef];
        return myImage;
    }
    
  • 2021-02-01 00:30
    void SaveScreenImage()
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        CGImageRef cgImage = UIGetScreenImage();
        void *imageBytes = NULL;
        if (cgImage == NULL) {
            // Fallback: render every window's layer into a bitmap context.
            CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
            imageBytes = malloc(320 * 480 * 4);
            CGContextRef context = CGBitmapContextCreate(imageBytes, 320, 480, 8, 320 * 4, colorspace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
            CGColorSpaceRelease(colorspace);
            for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
                CGRect bounds = [window bounds];
                CALayer *layer = [window layer];
                CGContextSaveGState(context);
                if ([layer contentsAreFlipped]) {
                    CGContextTranslateCTM(context, 0.0f, bounds.size.height);
                    CGContextScaleCTM(context, 1.0f, -1.0f);
                }
                [layer renderInContext:context];
                CGContextRestoreGState(context);
            }
            cgImage = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
        }
        UIImage *image = [UIImage imageWithCGImage:cgImage];
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
        CGImageRelease(cgImage); // release whichever path produced the image
        free(imageBytes);        // safe: CGBitmapContextCreateImage copies caller-supplied data
        [pool release];
    }
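
    The translate-then-scale pair in the flipped-layer branch above maps UIKit's top-left origin to Quartz's bottom-left origin; its net effect on a y coordinate is simply y' = height − y. A minimal C sketch of that mapping:

```c
/* Net effect of CGContextTranslateCTM(ctx, 0, h) followed by
   CGContextScaleCTM(ctx, 1, -1): the scale (applied first to points)
   sends y to -y, then the translation shifts it by h, giving h - y. */
static double flipped_y(double y, double height) {
    return height - y;
}
```

    A point at the top of the view (y = 0) lands at the top of the rendered bitmap instead of the bottom, which is exactly what the flip is for.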
    

    This code saves exactly what you see on the screen, but be aware that UIGetScreenImage is a private API.

  • 2021-02-01 00:44

    It's definitely possible. The trick is to use glReadPixels to pull the image data out of the OpenGL framebuffer into memory you can use. Once you have a pointer to the image data, you can use CGDataProviderCreateWithData and CGImageCreate to create a CGImage from the data. I'm working on an OpenGL-based drawing app that uses this technique a lot!
