glReadPixels only saves 1/4 screen size snapshots

Submitted by 空扰寡人 on 2019-12-13 14:12:34

Question


I'm working on an Augmented Reality app for a client. The OpenGL and EAGL part was done in Unity 3D and embedded as a view in my application.

What I need now is a button that snaps a screenshot of the OpenGL content, which is the backmost view.

I tried writing it myself, but when I tap the button with the assigned IBAction, it only saves 1/4 of the screen (the lower-left corner), though it does save it to the camera roll.

So basically, how can I make it save the entire screen size instead of just one fourth?

Here's my code for the method:

-(IBAction)tagBillede:(id)sender
{
    UIImage *outputImage = nil;

    CGRect s = CGRectMake(0, 0, 320, 480);            
    uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);

    if (!buffer) goto error;

    glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);    

    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);

    if (!ref) goto error;

    CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);

    if (!iref) goto error;        

    size_t width = CGImageGetWidth(iref);    
    size_t height = CGImageGetHeight(iref);    
    size_t length = width * height * 4;            
    uint32_t *pixels = (uint32_t *)malloc(length);

    if (!pixels) goto error;

    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,    
        CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

    if (!context) goto error;            

    CGAffineTransform transform = CGAffineTransformIdentity;    
    transform = CGAffineTransformMakeTranslation(0.0f, height);    
    transform = CGAffineTransformScale(transform, 1.0, -1.0);    
    CGContextConcatCTM(context, transform);            
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);         
    CGImageRef outputRef = CGBitmapContextCreateImage(context);

    if (!outputRef) goto error;

    outputImage = [UIImage imageWithCGImage: outputRef]; 

    if (!outputImage) goto error;    

    CGDataProviderRelease(ref);   
    CGImageRelease(iref);    
    CGContextRelease(context);    
    CGImageRelease(outputRef);
    free(pixels);
    free(buffer);

    UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    return;

error:
    // (Error handling was elided in the original question; anything
    // allocated before the failure point leaks here.)
    NSLog(@"tagBillede: snapshot failed");
}

Answer 1:


I suspect you are using a device with a Retina display, which is 640x960. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:

CGFloat scale = UIScreen.mainScreen.scale;
CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);            



Answer 2:


If the device is a Retina device, you need to account for the scale in the OpenGL capture yourself. By reading only half the width and half the height, you are effectively asking for just the lower-left quarter of the framebuffer.

You could simply double the width and height for Retina screens, but realistically you should multiply by the screen's scale:

CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
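Rather than hard-coding 320x480 at all, another option is to ask OpenGL ES for the renderbuffer's actual pixel dimensions. This is a sketch, not from the original answers, and it assumes an OpenGL ES 1.x EAGL setup where the colour renderbuffer is currently bound (variable names are illustrative):

```objc
// Query the bound renderbuffer's backing size (OES_framebuffer_object, ES 1.x).
GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

// These values are already in pixels, so no manual scale factor is needed:
CGRect s = CGRectMake(0, 0, backingWidth, backingHeight);
```

This stays correct even if the layer's contentScaleFactor or view size changes, since the size comes from the buffer glReadPixels actually reads.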



Answer 3:


Thought I'd chime in and, at the same time, throw out some gratitude :)

I've got it working like a charm now; here's the cleaned-up code:

UIImage *outputImage = nil;

// Take the Retina scale into account so the full framebuffer is captured.
CGFloat scale = [[UIScreen mainScreen] scale];
CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
uint8_t *buffer = (uint8_t *) malloc(s.size.width * s.size.height * 4);

// Read the raw RGBA pixels straight out of the OpenGL framebuffer.
glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// Wrap the raw pixel buffer in a CGImage.
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);

CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);

size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width * height * 4;
uint32_t *pixels = (uint32_t *)malloc(length);

CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             CGImageGetColorSpace(iref), kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

// OpenGL's origin is bottom-left while Core Graphics' is top-left,
// so flip the image vertically while drawing it into the context.
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformMakeTranslation(0.0f, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);

outputImage = [UIImage imageWithCGImage: outputRef];

// Release the intermediate Core Graphics objects and buffers.
CGDataProviderRelease(ref);
CGImageRelease(iref);
CGContextRelease(context);
CGImageRelease(outputRef);
free(pixels);
free(buffer);

UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);


Source: https://stackoverflow.com/questions/9155515/glreadpixels-only-saves-1-4-screen-size-snapshots
