When to use different integer types?

猫巷女王i asked on 2021-02-13 20:54

Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic:

  • signed and unsigned types
  • types of different sizes (short, int, long, ...)
8 Answers
  • 2021-02-13 20:59

    The default integral type (int) gets a "first among equals" preferential treatment in pretty much all languages. So we can use that as a default, if no reasons to prefer another type exist.

    Such reasons might be:

    • Using a bigger type if you know you need the additional range, or a smaller type if you want to conserve memory and don't mind the smaller range.
    • Using an unsigned type to make sure that you don't get any "extra" 1s in your integer representation if you intend to use bit shifting operators (<< and >>).
    • If the language does not guarantee a minimum (or even fixed) size for a type (e.g. C/C++ vs C#/Java), and you care about its properties, you should prefer some mechanism of generating a type with guaranteed size (e.g. int32_t) -- if your program is meant to be portable and expected to be compiled with different compilers, this becomes more important.
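    A minimal sketch of the last two points (assuming a platform where int is 32 bits, as on most modern desktops): an unsigned value always shifts in zero bits, and the <cstdint> typedefs pin down the size.

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main()
    {
        // Shifting an unsigned value always fills with zero bits;
        // right-shifting a negative signed value may drag in "extra" 1s.
        std::uint32_t u = 0x80000000u;   // top bit set
        assert((u >> 4) == 0x08000000u); // no 1s shifted in

        // Fixed-width typedefs from <cstdint> guarantee their size,
        // unlike plain int/long, whose sizes vary across platforms.
        static_assert(sizeof(std::int32_t) == 4, "exactly 32 bits");
        return 0;
    }
    ```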

    Update (expanding on guaranteed size types)

    My personal opinion is that types with no guaranteed fixed size are more trouble than they're worth today. I won't go into the historical reasons that gave birth to them (briefly: source-code portability), but the reality is that in 2011 very few people, if any, stand to benefit from them.

    On the other hand, there are lots of things that can go wrong when using such types:

    • The type turns out to not have the necessary range
    • You access the underlying memory for a variable (maybe to serialize it) but due to the processor's endianness and the non-fixed size of the type you end up introducing a bug

    For these reasons (and there are probably others too), using such types is in theory a major pain. Additionally, unless extreme portability is a requirement, you don't stand to benefit enough to compensate. And indeed, the whole purpose of typedefs like int32_t is to eliminate the use of loosely sized types entirely.

    As a practical matter, if you know that your program is not going to be ported to another compiler or architecture, you can ignore the fact that the types have no fixed size and treat them as if they are the known size your compiler uses for them.
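    As a practical check, one can simply print the sizes the compiler actually uses; only the fixed-width typedef is guaranteed, and the other numbers below depend on your platform:

    ```cpp
    #include <cstdint>
    #include <iostream>

    int main()
    {
        // The loosely sized built-in types vary by platform and compiler...
        std::cout << "short:   " << sizeof(short)        << " bytes\n";
        std::cout << "int:     " << sizeof(int)          << " bytes\n";
        std::cout << "long:    " << sizeof(long)         << " bytes\n";
        // ...while int32_t is 4 bytes by definition, everywhere.
        std::cout << "int32_t: " << sizeof(std::int32_t) << " bytes\n";
        return 0;
    }
    ```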

  • 2021-02-13 21:02

    In general you should use the type that suits the requirements of your program and promotes readability and future maintainability as much as possible.

    Having said that, as Chris points out, people do use shorts vs. ints to save memory. Think about the following scenario: you have 1,000,000 (a fairly modest number) values to store as ints (typically 32 bits / 4 bytes) vs. shorts (typically 16 bits / 2 bytes). If you know you'll never need to represent a number larger than 32,767, you could use a short. Or you could use an unsigned short if you know you'll never need a number larger than 65,535. This would save ((4 - 2) x 1,000,000) = 2,000,000 bytes, roughly 2 MB of memory.
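    A quick sketch of that back-of-the-envelope calculation (assuming the typical 4-byte int and 2-byte short; sizeof reports the actual sizes on your platform):

    ```cpp
    #include <cassert>
    #include <cstddef>

    int main()
    {
        const std::size_t n = 1000000;

        // Element storage for n ints vs. n shorts; with 4-byte ints and
        // 2-byte shorts this is (4 - 2) * 1,000,000 = 2,000,000 bytes (~2 MB).
        std::size_t int_bytes   = n * sizeof(int);
        std::size_t short_bytes = n * sizeof(short);
        assert(int_bytes - short_bytes == 2000000); // holds on typical platforms
        return 0;
    }
    ```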

  • 2021-02-13 21:03

    Use shorter types to save memory and longer ones to be able to represent larger numbers. If you don't have such requirements, consider which APIs you'll be sharing data with and set yourself up so you don't have to cast or convert too much.

  • 2021-02-13 21:10

    Maybe just for fun, here is a simple example showing how, depending on which type you choose, you get one result or another.

    Naturally, the actual reason why you would choose one type over another is, in my opinion, related to other factors; for instance, the shift operators.

    #include <iostream>
    #include <cmath>
    using namespace std;
    
    int main()
    {
        int i;
    
        // Try each of these declarations in turn and compare the output:
        //unsigned long long x;
        //int x;
        short x;
    
        x = 2;
        for (i = 2; i < 15; ++i)
        {
            // Repeated squaring quickly exceeds the range of a short
            // (max 32,767), so the printed values soon become garbage,
            // while the wider types stay correct for longer.
            x = pow(x, 2);
            cout << x << endl;
        }
        return 0;
    }
    
  • 2021-02-13 21:14

    One by one to your questions:

    1. signed and unsigned: depends on what you need. If you're sure the number will never be negative, use an unsigned type. This gives you the opportunity to store bigger numbers: for example, a signed char (1 byte) has the range [-128, 127], but if it's unsigned the maximum value is roughly doubled, because the sign bit becomes a value bit, so an unsigned char can hold 255 (all bits set to 1).

    2. short, int, long, long long: these are pretty clear, aren't they? The smallest integer type (apart from char) is short, the next one is int, etc. But their sizes are platform dependent: int could be 2 bytes (long long ago :D) or 4 bytes (usually); long could be 4 bytes (on a 32-bit platform) or 8 bytes (on a 64-bit platform), etc. long long is not a standard type in C++ yet (it will be in C++0x), but it's usually a typedef for int64_t.

    3. int32_t vs int: int32_t and the other fixed-width types guarantee their size. For example, int32_t is guaranteed to be 32 bits wide, while, as I already said, the size of int is platform dependent.
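    The ranges mentioned in points 1 and 3 can be checked with std::numeric_limits; a small sketch, assuming the usual two's-complement representation:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <limits>

    int main()
    {
        // Point 1: making the type unsigned turns the sign bit into a
        // value bit, roughly doubling the maximum of a 1-byte char.
        assert(std::numeric_limits<signed char>::min()   == -128);
        assert(std::numeric_limits<signed char>::max()   ==  127);
        assert(std::numeric_limits<unsigned char>::max() ==  255);

        // Point 3: int32_t has a guaranteed width, hence a guaranteed range.
        assert(std::numeric_limits<std::int32_t>::max() == 2147483647);
        return 0;
    }
    ```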

  • 2021-02-13 21:19

    Typically, you use int, unless you need to expand it because you need a larger range, or you want to shrink it because you know the value only makes sense in a smaller range. It's incredibly rare that you would need to change types for memory considerations; the difference between them is usually minuscule.
