When to use different integer types?

猫巷女王i · 2021-02-13 20:54

Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic:

  • signed and unsigned types
  • types of different sizes (e.g. short, int, long)
8 Answers
  •  予麋鹿 (OP) · 2021-02-13 20:59

    The default integral type (int) gets "first among equals" preferential treatment in pretty much all languages, so we can use it as the default when there is no reason to prefer another type.

    Such reasons might be:

    • Using a bigger type if you know you need the additional range, or a smaller type if you want to conserve memory and don't mind the smaller range.
    • Using an unsigned type if you intend to use the bit-shift operators (<< and >>), so that you don't get any "extra" 1s in the high bits: right-shifting a negative signed value typically sign-extends (see the sketch after this list).
    • If the language does not guarantee a minimum (or even fixed) size for a type (e.g. C/C++ vs C#/Java) and you care about its range, you should prefer some mechanism for obtaining a type with a guaranteed size (e.g. int32_t); if your program is meant to be portable and expected to be compiled with different compilers, this becomes all the more important.
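
    Here is a minimal C sketch of the last two points (the variable names and values are made up for illustration; note that right-shifting a negative signed value is implementation-defined in C, which is precisely the hazard):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* Right-shifting a negative signed value commonly sign-extends,
               filling the high bits with "extra" 1s (the exact behavior is
               implementation-defined in C). */
            int      s = -8;           /* ...11111000 in two's complement */
            unsigned u = (unsigned)-8; /* same bit pattern, no sign */
            printf("%d\n", s >> 1);    /* typically -4: 1s shifted in */
            printf("%u\n", u >> 1);    /* 0 shifted in, as expected */

            /* Fixed-width types from <stdint.h> have the same size on
               every platform that provides them. */
            int32_t exactly_32_bits = 42;
            printf("%d\n", (int)exactly_32_bits);
            return 0;
        }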

    Update (expanding on guaranteed size types)

    My personal opinion is that types with no guaranteed fixed size are more trouble than they are worth today. I won't go into the historical reasons that gave birth to them (briefly: source-code portability), but the reality is that in 2011 very few people, if any, stand to benefit from them.

    On the other hand, there are lots of things that can go wrong when using such types:

    • The type turns out not to have the necessary range
    • You access the underlying memory of a variable (say, to serialize it), and the combination of the processor's endianness and the non-fixed size of the type introduces a bug (see the sketch below)
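
    To make the second point concrete, here is a minimal C sketch contrasting a raw-memory write with a fixed-width, explicit-byte-order version (the function names are made up for illustration):

        #include <stdint.h>
        #include <stdio.h>

        /* Risky: writes sizeof(long) bytes in host byte order. 'long' is
           4 bytes on some platforms and 8 on others, so a file written on
           one machine may not be readable on another. */
        void save_risky(FILE *f, long value)
        {
            fwrite(&value, sizeof value, 1, f);
        }

        /* Safer: a fixed-width type pins the size, and serializing byte
           by byte in an explicit order removes the endianness dependency. */
        void save_fixed(FILE *f, int32_t value)
        {
            uint32_t v = (uint32_t)value;
            unsigned char buf[4] = {
                (unsigned char)(v & 0xFF),         /* little-endian on disk */
                (unsigned char)((v >> 8) & 0xFF),
                (unsigned char)((v >> 16) & 0xFF),
                (unsigned char)((v >> 24) & 0xFF),
            };
            fwrite(buf, sizeof buf, 1, f);
        }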

    For these reasons (and there are probably others), using such types is a major pain. Additionally, unless extreme portability is a requirement, there is no compensating benefit. Indeed, the whole purpose of typedefs like int32_t is to eliminate the use of loosely sized types entirely.

    As a practical matter, if you know that your program is not going to be ported to another compiler or architecture, you can ignore the fact that the types have no fixed size and treat them as if they are the known size your compiler uses for them.
