Why would uint32_t be preferred rather than uint_fast32_t?

没有蜡笔的小新 2021-01-31 00:58

It seems that uint32_t is much more prevalent than uint_fast32_t (I realise this is anecdotal evidence). That seems counter-intuitive to me, though.

11 Answers
  •  长发绾君心
    2021-01-31 01:51

    In many cases, when an algorithm works on an array of data, the best way to improve performance is to minimize the number of cache misses. The smaller each element is, the more of them fit into the cache. This is why a lot of code is still written to use 32-bit pointers on 64-bit machines: such programs don’t need anything close to 4 GiB of data, but the cost of making every pointer and offset take eight bytes instead of four would be substantial.
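
    For a concrete sense of the footprint difference, here is a minimal sketch (type sizes are implementation-defined; on a typical 64-bit Linux/glibc target, uint_fast32_t is 8 bytes, so the second array is twice as large):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            enum { N = 1000000 };
            /* An exact-width array vs. a "fast" array with the same element
             * count. Whatever cache you have, smaller elements fill less of it. */
            printf("uint32_t:      %zu-byte elements, %zu bytes total\n",
                   sizeof(uint32_t), sizeof(uint32_t) * (size_t)N);
            printf("uint_fast32_t: %zu-byte elements, %zu bytes total\n",
                   sizeof(uint_fast32_t), sizeof(uint_fast32_t) * (size_t)N);
            return 0;
        }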

    Some ABIs and protocols are also specified to require exactly 32 bits; IPv4 addresses, for example. That’s what uint32_t really means: use exactly 32 bits, regardless of whether that’s efficient on the CPU or not. These used to be declared as long or unsigned long, which caused a lot of problems during the 64-bit transition. If you just need an unsigned type that holds numbers up to at least 2³²−1, that’s been the definition of unsigned long since the first C standard came out. In practice, though, enough old code assumed that a long could hold any pointer, file offset, or timestamp, and enough old code assumed that it was exactly 32 bits wide, that compilers can’t necessarily make long the same as int_fast32_t without breaking too much stuff.
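
    Here is a sketch of why exact width matters for a format like an IPv4 address (ipv4_pack is an illustrative helper, not from any library; a uint_fast32_t that happened to be wider would not match the 4-byte layout the protocol requires):

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Pack the four octets of a dotted-quad address into one
         * 32-bit value, most significant octet first. */
        static uint32_t ipv4_pack(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
        {
            return ((uint32_t)a << 24) | ((uint32_t)b << 16)
                 | ((uint32_t)c << 8)  |  (uint32_t)d;
        }

        int main(void)
        {
            uint32_t addr = ipv4_pack(192, 168, 0, 1);
            printf("0x%08" PRIX32 "\n", addr); /* prints 0xC0A80001 */
            return 0;
        }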

    In theory, it would be more future-proof for a program to use uint_least32_t, and maybe even load uint_least32_t elements into a uint_fast32_t variable for calculations. An implementation that had no uint32_t type at all could even declare itself in formal compliance with the standard! (It just wouldn’t be able to compile many existing programs.) In practice, there is no mainstream architecture left where int, uint32_t, and uint_least32_t are not all 32 bits wide, and currently no performance advantage to using uint_fast32_t. So why overcomplicate things?
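
    As a sketch of that storage-versus-calculation split (sum32 is an illustrative name; note one caveat: if uint_fast32_t is wider than 32 bits, an overflowing sum wraps differently than it would in uint32_t):

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Compact uint_least32_t storage, widened into a uint_fast32_t
         * accumulator so the compiler can use the native word size. */
        static uint_fast32_t sum32(const uint_least32_t *v, size_t n)
        {
            uint_fast32_t total = 0;
            for (size_t i = 0; i < n; i++)
                total += v[i];
            return total;
        }

        int main(void)
        {
            uint_least32_t data[] = { 10, 20, 30, 40 };
            printf("%ju\n", (uintmax_t)sum32(data, sizeof data / sizeof data[0]));
            return 0;
        }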

    Yet look at the reason all the 32_t types needed to exist when we already had long, and you’ll see that those assumptions have blown up in our faces before. Your code might well end up running someday on a machine where exact-width 32-bit calculations are slower than the native word size, and you would have been better off using uint_least32_t for storage and uint_fast32_t for calculation religiously. Or, if you’d rather cross that bridge when you come to it and just want something simple, there’s unsigned long.
