Question
This is a follow-up to my previous question about why size_t is necessary.
Given that size_t is guaranteed to be big enough to represent the size of the largest block of memory you can allocate (meaning there can still be integer values larger than size_t can hold), my question is...
What determines how much you can allocate at once?
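For reference, a minimal sketch of the ceiling the type itself imposes; the printed values are platform-dependent, and this only shows the theoretical upper bound, not what malloc will actually grant:

```c
#include <stdio.h>
#include <stdint.h>   /* SIZE_MAX */

int main(void)
{
    /* size_t caps the size malloc can even be asked for:
       SIZE_MAX is typically 4294967295 on a 32-bit target
       and 18446744073709551615 on a 64-bit one. */
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);
    return 0;
}
```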
Answer 1:
The architecture of your machine, the operating system (the two are intertwined), and your compiler/set of libraries together determine how much memory you can allocate at once.
malloc doesn't need to be able to use all the memory the OS could give it. The OS doesn't need to make available all the memory present in the machine (various versions of Windows Server, for example, have different maximum-memory limits for licensing reasons).
But note that the OS can make available more memory than is physically present in the machine, and even more memory than the motherboard permits (say the motherboard has a single memory slot that accepts only a 1 GB stick; Windows could still let a program allocate 2 GB of memory). This is done through the use of virtual memory and paging (you know, the swap file, your old and slow friend :-) or, for example, through the use of NUMA.
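A quick way to see virtual memory in action is the sketch below. It assumes a 64-bit size_t, and the outcome depends on the OS's overcommit policy (on Linux with default overcommit, the allocation can succeed even on a machine with less physical RAM than requested, because pages are only backed by real memory when touched):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Ask for 8 GB; assumes a 64-bit size_t. With virtual memory
       and (on Linux) overcommit, this can succeed even if the
       machine has less physical RAM than that. */
    size_t want = (size_t)8 * 1024 * 1024 * 1024;
    void *p = malloc(want);
    printf("malloc(%zu) %s\n", want, p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```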
Answer 2:
I can think of three constraints, in actual code (combined in the probe sketch after this list):
- The biggest value the unsigned integer type size_t can represent; size_t should be the same type (same size, etc.) that the OS's memory-allocation mechanism uses.
- The biggest block the operating system is able to handle in RAM (how is a block's size represented, and how does that representation affect the maximum block size?).
- Memory fragmentation (the largest contiguous free block) and the total amount of free RAM available.
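To see these constraints combined in practice, here is a rough probe (a sketch, not a benchmark: the result varies with the OS, overcommit settings, fragmentation, and whatever else is running) that binary-searches for the largest single malloc that currently succeeds:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Binary-search for the largest single block malloc will hand
       out right now. Invariant: lo bytes were allocatable, hi bytes
       were not (or are untested). */
    size_t lo = 0;
    size_t hi = (size_t)-1;   /* SIZE_MAX */

    while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        void *p = malloc(mid);
        if (p) {
            free(p);
            lo = mid;         /* mid bytes were allocatable */
        } else {
            hi = mid;         /* mid bytes were not */
        }
    }
    printf("largest single malloc right now: about %zu bytes\n", lo);
    return 0;
}
```

On systems that overcommit, the number this prints reflects address space and the overcommit limit rather than physical RAM, which is exactly the point of Answer 1.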
Source: https://stackoverflow.com/questions/7850482/what-determines-how-much-memory-can-be-allocated