It seems that uint32_t is much more prevalent than uint_fast32_t (I realise this is anecdotal evidence). That seems counter-intuitive to me, though.
To give a direct answer: I think the real reason why uint32_t is used over uint_fast32_t or uint_least32_t is simply that it is easier to type and, being shorter, much nicer to read. If you make structs with some types, and some of them are uint_fast32_t or similar, it's often hard to align them nicely with int or bool or other types in C, which are quite short (case in point: char vs. character). I of course cannot back this up with hard data, but the other answers can only guess at the reason as well.
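As a contrived sketch of what I mean (the struct and field names are made up purely for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical structs, only to show how the type names read side by side. */
struct packet_a {               /* short, fixed-width names line up easily */
    uint32_t sequence;
    uint32_t checksum;
    bool     valid;
};

struct packet_b {               /* the longer variant names are clunkier to scan */
    uint_least32_t sequence;
    uint_fast32_t  checksum;
    bool           valid;
};
```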
As for technical reasons to prefer uint32_t, I don't think there are any - when you absolutely need an exact 32-bit unsigned integer, that type is your only standardised choice. In almost all other cases, the other variants are technically preferable - specifically, uint_fast32_t if you are concerned about speed, and uint_least32_t if you are concerned about storage space. Using uint32_t in either of these cases risks the code not compiling at all, as the type is not required to exist.
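A minimal way to see the distinction on your own machine (the exact sizes are implementation-defined; on a typical 64-bit glibc system, for instance, uint_fast32_t is often 8 bytes):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* uint32_t:       exactly 32 bits (optional in the standard)
       uint_least32_t: smallest type with at least 32 bits
       uint_fast32_t:  "fastest" type with at least 32 bits     */
    printf("uint32_t:       %zu bytes\n", sizeof(uint32_t));
    printf("uint_least32_t: %zu bytes\n", sizeof(uint_least32_t));
    printf("uint_fast32_t:  %zu bytes\n", sizeof(uint_fast32_t));
    return 0;
}
```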
In practice, uint32_t and the related exact-width types exist on all current platforms, except some very rare (nowadays) DSPs or joke implementations, so there is little actual risk in using the exact type. Similarly, while you can run into speed penalties with the fixed-width types, they are (on modern CPUs) not crippling anymore.
Which is why, I think, the shorter type simply wins out in most cases, due to programmer laziness.