Question
I have built a camera using AVFoundation. Once my AVCaptureStillImageOutput has completed its captureStillImageAsynchronouslyFromConnection:completionHandler: method, I create an NSData object like this:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
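For context, that line lives inside the capture completion handler. A rough sketch of the surrounding code (here stillImageOutput is assumed to be an already-configured AVCaptureStillImageOutput):
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (error != nil || imageDataSampleBuffer == NULL) {
        return; // capture failed
    }
    // JPEG-encoded bytes straight from the still image sample buffer
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    // ... rotate / save imageData from here ...
}];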
Once I have the NSData object, I would like to rotate the image without converting to a UIImage. I have found that I can convert to a CGImage to do so.
After I have the imageData, I start the process of converting to a CGImage, but I have found that the CGImageRef ends up being THIRTY times larger than the NSData object. Here is the code I use to convert from NSData to CGImage:
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(imageData));
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
If I NSLog out the size of the image, it comes to 30 megabytes when the NSData was a 1.5-2 megabyte image!
size_t imageSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
NSLog(@"cgimage size = %zu",imageSize);
I thought that maybe the image gets decompressed when you go from NSData to CGImage, and that converting back to NSData might bring it back down to the right file size.
imageData = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
The above NSData has the same length as the CGImageRef object. If I try to save the image, the result is a 30 MB file that cannot be opened.
I am totally new to using CGImage, so I am not sure whether I am converting from NSData to CGImage and back incorrectly, or whether I need to call some method to compress the image again.
Thanks in advance,
Will
Answer 1:
I was doing some image manipulation and came across your question on SO. Seems like no one else came up with an answer, so here's my theory.
While it's theoretically possible to convert a CGImageRef back to NSData in the manner you described, the data itself is invalid: it is not a real JPEG or PNG, as you discovered when it could not be opened. So I don't think that NSData.length is correct. You have to jump through a number of steps to recreate an NSData representation of a CGImageRef:
// requires the ImageIO and MobileCoreServices frameworks
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// incoming image data
NSData *image;

// create the image ref
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef) image);
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);

// image metadata properties (EXIF, GPS, TIFF, etc.); nil is fine if you have none
NSDictionary *properties;

// create the new output data
CFMutableDataRef newImageData = CFDataCreateMutable(NULL, 0);

// my code assumes JPEG type since the input is from the iOS device camera
// (note the MIME type is "image/jpeg", not "image/jpg")
CFStringRef type = UTTypeCreatePreferredIdentifierForTag(kUTTagClassMIMEType, (__bridge CFStringRef) @"image/jpeg", kUTTypeImage);

// create the destination
CGImageDestinationRef destination = CGImageDestinationCreateWithData(newImageData, type, 1, NULL);

// add the image to the destination, re-encoding as it goes
CGImageDestinationAddImage(destination, imageRef, (__bridge CFDictionaryRef) properties);

// finalize the write
CGImageDestinationFinalize(destination);

// memory cleanup
CGDataProviderRelease(imgDataProvider);
CGImageRelease(imageRef);
CFRelease(type);
CFRelease(destination);

NSData *newImage = (__bridge_transfer NSData *) newImageData;
With these steps, newImage.length should be about the same as image.length. I haven't tested this directly, since I actually crop between the input and the output, but based on the crop the size is roughly what I expected: the output has roughly half the pixels of the input, and the output length is roughly half the input length.
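One more note, since the original goal was rotating without going through UIImage: the properties dictionary above is the place to do it. CGImageDestinationAddImage honors kCGImagePropertyOrientation, so something like the sketch below (the 90-degree rotation and 0.8 quality are example values) would tag the output as rotated. This only writes EXIF orientation metadata, it does not re-draw the pixels; and the quality key, not the pixel data, is what really drives newImage.length:
NSDictionary *properties = @{
    // EXIF orientation 6 = display rotated 90 degrees clockwise
    (__bridge NSString *) kCGImagePropertyOrientation : @6,
    // JPEG re-encode quality in 0.0-1.0; this controls the output length
    (__bridge NSString *) kCGImageDestinationLossyCompressionQuality : @0.8
};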
Source: https://stackoverflow.com/questions/19527902/converting-nsdata-to-cgimage-and-then-back-to-nsdata-makes-the-file-too-big