It seems that uint32_t is much more prevalent than uint_fast32_t (I realise this is anecdotal evidence). That seems counter-intuitive to me, though. I have not seen evidence that uint32_t is used for its range. Instead, most of the time I've seen uint32_t used, it is to hold exactly 4 octets of data in various algorithms, with guaranteed wraparound and shift semantics!
There are also other reasons to use uint32_t instead of uint_fast32_t: it provides a stable ABI, and the memory usage is known exactly. That very much offsets whatever speed gain there would be from uint_fast32_t on platforms where that type is distinct from uint32_t.
For values < 65536 there is already a handy type: unsigned int (unsigned short is required to have at least that range as well, but unsigned int is of the native word size). For values < 4294967296 there is another: unsigned long.
And lastly, people do not use uint_fast32_t because it is annoyingly long to type and easy to mistype :D