I have a method that needs to parse through a bunch of large PNG images pixel by pixel (the PNGs are 600x600 pixels each). It seems to work great on the Simulator, but on the device (iPad), I get an EXC_BAD_ACCESS in some internal memory copying function. The size seems to be the culprit, because everything works if I try it on smaller images. Here's the memory-related meat of the method:
+ (CGRect) getAlphaBoundsForUImage: (UIImage*) image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    memset(rawData, 0, height * width * 4);

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    /* non-memory related stuff */

    free(rawData);
When I run this on a bunch of images, it runs 12 times and then craps out, while on the Simulator it runs with no problem. Do you guys have any ideas?
Running 12 times and then crashing sounds like a running-out-of-memory problem. It might be that, internally, the CGContext is creating some large autoreleased structures. Since you're doing this in a loop, they never get freed, so you run out of memory and die.
I'm not sure how Core Foundation deals with temporary objects though. I don't think CF objects have the equivalent of autorelease, and a Core Graphics context is almost certainly dealing with CF objects rather than NSObjects.
To reduce the memory churn in your code, I would suggest refactoring it to create an offscreen CGContext once before you start processing, and use it repeatedly to process each image. Then release it when you are done. That is going to be faster in any case (since you aren't allocating huge data structures on each pass through the loop.)
I'll wager that will eliminate your crash problem, and I bet it also makes your code much, much faster. Memory allocation is very slow compared to other operations, and you're slinging around some pretty big data structures to handle 600x600 pixel RGBA images.
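A minimal sketch of that refactoring, assuming all the images are the same size and live in an images array (both of those assumptions are mine, not from the question):

// Create one offscreen context up front and reuse it for every image.
NSUInteger width = 600, height = 600;   // assumes every PNG is 600x600
NSUInteger bytesPerRow = width * 4;
unsigned char *rawData = malloc(height * bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

for (UIImage *image in images) {
    // Clear the buffer, then draw the next image into the same context.
    memset(rawData, 0, height * bytesPerRow);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);
    // ... scan rawData for the alpha bounds here ...
}

CGContextRelease(context);
free(rawData);

That way the malloc, the context creation, and the frees happen once per batch instead of once per image.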
Go to Product -> Edit Scheme and put a tick mark next to Enable Zombie Objects. Now build and run. It can give you a better, more to-the-point description of the EXC_BAD_ACCESS error.
I was getting a similar crash on my iPad (iPhoneOS 3.2, of course) using CGImageCreate(). Seeing your difficulty gave me a hint. I solved the problem by aligning my bytesPerRow to the next largest power of 2:
size_t bytesPerRowPower2 = (size_t) round( pow( 2.0, trunc( log((double) bytesPerRow) / log(2.0) ) + 1.0 ) );
Let us know if power-of-2 row alignment also solves your problem. You would need to allocate rawData with the adjusted size and pass bytesPerRowPower2 to CGBitmapContextCreate()... The height does not seem to need alignment.
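Wired into the question's method it would look something like this (my adaptation, reusing the question's variable names; only the row size changes):

// Round bytesPerRow up to the next power of two, then use the padded row
// size for both the allocation and the context.
size_t bytesPerRowPower2 = (size_t) round( pow( 2.0, trunc( log((double) bytesPerRow) / log(2.0) ) + 1.0 ) );
unsigned char *rawData = malloc(height * bytesPerRowPower2);
memset(rawData, 0, height * bytesPerRowPower2);
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent,
                                             bytesPerRowPower2, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// When scanning pixels afterwards, step through rows by bytesPerRowPower2, not width * 4.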
Perhaps CGImageGetBytesPerRow (power of two sounds excessive).
I used a power-of-two-aligned row size in an app that was defining the row size directly, since I was creating images programmatically. But I would also recommend that toastie try CGImageGetBytesPerRow before trying my suggestion.
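If you want to try that first, it's a small change to the question's method; note it only lines up if the source CGImage is already 32 bits per pixel, since CGBitmapContextCreate needs at least width * 4 bytes per row for this pixel format:

// Reuse the source image's own row stride (which Core Graphics may already
// have padded/aligned) instead of computing width * 4 by hand.
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
unsigned char *rawData = malloc(height * bytesPerRow);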
Try running the Allocations instrument while the application is running on the device; it could be a memory-related issue. If you are in a loop, create an autorelease pool inside that loop.
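For the autorelease pool part, a pre-ARC sketch (the images array and the ImageUtils class name are placeholders, not from the question):

for (UIImage *image in images) {
    // Drain a pool every iteration so autoreleased temporaries don't accumulate.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CGRect bounds = [ImageUtils getAlphaBoundsForUImage:image];
    // ... use bounds ...
    [pool drain];
}

On later toolchains, wrapping the loop body in @autoreleasepool { ... } does the same thing.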
Maybe replace
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
with
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGColorSpaceRelease(colorSpace);
as the color space ref might still be needed somehow? Just a random guess... And maybe even move its release after the release of the context:
// cleanup
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
I solved the problem by making a square context (width == height). I had a 512x256 texture and it crashed every time I sent the data to OpenGL. Now I allocate a 512x512 buffer but still only render 512x256. Hope this helps.
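In the question's terms, that workaround would look something like this (my adaptation, not the answerer's code; the row stride becomes side * 4 when you scan the pixels):

// Allocate and create the context with square dimensions, but still draw
// (and later read back) only the real image rectangle.
size_t side = MAX(width, height);
unsigned char *rawData = malloc(side * side * 4);
memset(rawData, 0, side * side * 4);
CGContextRef context = CGBitmapContextCreate(rawData, side, side, 8, side * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);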
Source: https://stackoverflow.com/questions/2619224/cgbitmapcontextcreate-on-the-iphone-ipad