The C standard guarantees that an int is able to store every possible array size. At least, that's what I understand from reading §6.5.2.1, subsection 1 (Array subscripting).
When the C Standard was written, it was common for machines to have a 16-bit "int" type and to be incapable of handling any single object larger than 65535 bytes, while nonetheless being capable of handling objects larger than 32767 bytes. Unsigned int arithmetic could represent the size of the largest such object, but signed int arithmetic could not, so size_t was defined to be unsigned in order to accommodate such objects without having to resort to "long" computations.
On machines where the maximum allowable object size lies between INT_MAX and UINT_MAX, the difference between pointers to the start and end of such an object may be too large to fit in "int". While the Standard doesn't impose any requirements on how implementations should handle that, a common approach is to define integer and pointer wrap-around behavior such that if S and E are pointers to the start and end of a char[49152], then even though E-S would exceed INT_MAX, it will yield a value which, when added to S, will yield E.
Nowadays, there's seldom any real advantage to size_t being an unsigned type (code that needs objects larger than 2GB would usually need 64-bit pointers for other reasons anyway), and it causes many kinds of comparisons involving object sizes to behave counter-intuitively, but the fact that sizeof expressions yield an unsigned type is sufficiently well entrenched that it's unlikely ever to change.