I'm building a photo book layout application. The application frequently decompresses JPEG images into in-memory bitmap buffers. The size of the images is constrained to 10
You may be running out of swap space. Even though you have a swap file and virtual memory, the amount of swap space available is still limited by the free space on your hard disk that can be devoted to swap files.
initWithBytes:length: tries to allocate its entire length in active memory, essentially equivalent to a malloc() of that size. If the length exceeds available memory, you will get nil. If you want to use large files with NSData, I'd recommend initWithContentsOfMappedFile: or similar initializers, as they use the VM system to pull parts of the file in and out of active memory when needed.
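To make that concrete, here is a minimal sketch of the mapped approach; the file path is a placeholder, and the code assumes manual retain/release, which was the norm on 10.6:

    // Map a large file instead of reading it all into active memory; the VM
    // system faults pages in and out as the data is actually touched.
    #import <Foundation/Foundation.h>

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSString *path = @"/tmp/decoded-bitmap.raw";   // placeholder path

        NSData *mapped = [[NSData alloc] initWithContentsOfMappedFile:path];
        if (mapped == nil) {
            NSLog(@"Could not map %@", path);
        } else {
            NSLog(@"Mapped %lu bytes without wiring them all into RAM",
                  (unsigned long)[mapped length]);
        }
        [mapped release];   // safe even if mapped is nil
        [pool drain];
        return 0;
    }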
Another guess, but it may be that your colleague's machine is configured with a stricter maximum memory per user process setting. To check, type ulimit -a into a console. For me, I get:
    ~ iainmcgin$ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    file size               (blocks, -f) unlimited
    max locked memory       (kbytes, -l) unlimited
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 256
    pipe size            (512 bytes, -p) 1
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 266
    virtual memory          (kbytes, -v) unlimited
From my settings above, it seems there is no per-process limit on memory usage here. That may not be the case on your colleague's machine.
I'm using Snow Leopard:
    ~ iainmcgin$ uname -rs
    Darwin 10.6.0
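If you would rather check the same limits from inside the process than from the shell, a minimal sketch using the POSIX getrlimit() call could look like this (the choice of which limits to print is just illustrative):

    // Query per-process resource limits programmatically; RLIM_INFINITY
    // corresponds to the "unlimited" entries in the ulimit output above.
    #include <stdio.h>
    #include <sys/resource.h>

    static void printLimit(const char *name, int resource) {
        struct rlimit rl;
        if (getrlimit(resource, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY) {
                printf("%s: unlimited\n", name);
            } else {
                printf("%s: %llu bytes\n", name, (unsigned long long)rl.rlim_cur);
            }
        }
    }

    int main(void) {
        printLimit("data seg size", RLIMIT_DATA);   // ulimit -d
        printLimit("stack size", RLIMIT_STACK);     // ulimit -s
        return 0;
    }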
Even though a 64-bit computer can theoretically address 18 EB, current processors only implement 48-bit virtual addresses, which limits them to 256 TB. Of course, you aren't reaching this limit either. But the amount of memory your process can use at one time is limited to the amount of RAM available, and the OS may also limit the amount of RAM you can use. According to the link you posted, "Even for computers that have 4 or more gigabytes of RAM available, the system rarely dedicates this much RAM to a single process."
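As a small, hedged sketch of checking the practical limit rather than the theoretical one, you could query how much physical RAM is actually installed via sysctlbyname (the output formatting here is just for illustration):

    // Query installed physical RAM, which is what realistically bounds how much
    // of a single process's working set can stay resident at once.
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void) {
        int64_t physicalMemory = 0;
        size_t size = sizeof(physicalMemory);
        if (sysctlbyname("hw.memsize", &physicalMemory, &size, NULL, 0) == 0) {
            printf("Physical RAM: %lld bytes (about %.1f GB)\n",
                   physicalMemory, physicalMemory / 1e9);
        }
        return 0;
    }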
It could be a memory fragmentation issue. Perhaps there is no single contiguous chunk of 400 MB available at the time of allocation?
You could try to allocate these large chunks at the very start of your application's life cycle, before the heap gets a chance to become fragmented by numerous smaller allocations.
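A rough sketch of that idea, where the 400 MB buffer size and the pool count are purely illustrative rather than taken from the question:

    // Reserve a small pool of worst-case bitmap buffers before the heap has a
    // chance to fragment, then reuse them for later decompression work.
    #include <stdio.h>
    #include <stdlib.h>

    #define kBufferSize (400UL * 1024 * 1024)   // illustrative worst-case size
    #define kPoolCount  2                       // illustrative pool size

    static void *gBufferPool[kPoolCount];

    static int preallocateBuffers(void) {
        for (int i = 0; i < kPoolCount; i++) {
            gBufferPool[i] = malloc(kBufferSize);
            if (gBufferPool[i] == NULL) {
                fprintf(stderr, "failed to preallocate buffer %d\n", i);
                return 0;
            }
        }
        return 1;
    }

    int main(void) {
        if (!preallocateBuffers()) {
            return 1;
        }
        // ... later, hand gBufferPool[i] to the JPEG decompression code and reuse it ...
        return 0;
    }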
The answer lies in the implementation of libauto, Apple's garbage collector.
As of OS X 10.6, an arena of 8 GB is allocated for garbage-collected memory on 64-bit platforms. This arena is split in half: one half serves large allocations (>= 128 KB), the other serves small (< 2,048 bytes) and medium (< 128 KB) allocations.
So in effect, on 10.6 you have 4 GB of memory available for large allocations of garbage-collected memory. On 10.5 the arena had a size of 32 GB, but Apple lowered it to 8 GB in 10.6.
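To tie that back to the failing ~400 MB allocations, here is a hedged sketch (assuming the process is built with garbage collection enabled and that NSAllocateCollectable returns NULL when the collector cannot satisfy the request):

    // A single "large" (>= 128 KB) collectable allocation, which on 10.6 must
    // come out of the roughly 4 GB half of the libauto arena reserved for
    // large blocks.
    #import <Foundation/Foundation.h>

    int main(void) {
        NSUInteger size = 400UL * 1024 * 1024;           // roughly the buffer size in question
        void *bitmap = NSAllocateCollectable(size, 0);   // 0 = unscanned, collectable
        if (bitmap == NULL) {
            NSLog(@"Large collectable allocation of %lu bytes failed", (unsigned long)size);
        } else {
            NSLog(@"Allocated %lu bytes from the large-allocation space", (unsigned long)size);
        }
        return 0;
    }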