Is using an unsigned rather than signed int more likely to cause bugs? Why?


In the Google C++ Style Guide, on the topic of "Unsigned Integers", it is suggested that:

    Because of historical accident, the C++ standard also uses unsigned integers to represent the size of containers - many members of the standards body believe this to be a mistake, but it is effectively impossible to fix at this point.

7 Answers

    One of the most hair-raising examples of an error is when you MIX signed and unsigned values:

    #include <iostream>

    int main() {
        auto qualifier = -1 < 1u ? "makes" : "does not make";
        std::cout << "The world " << qualifier << " sense" << std::endl;
    }
    

    The output:

    The world does not make sense

    This happens because of the usual arithmetic conversions: the signed -1 is converted to unsigned int and becomes a huge positive value, so -1 < 1u evaluates to false.

    Unless you have a trivial application, it's inevitable that you'll end up either with dangerous mixes of signed and unsigned values (resulting in runtime errors) or, if you crank warnings up to compile-time errors, with a lot of static_casts scattered through your code. That's why it's best to use signed integers strictly for anything involved in arithmetic or logical comparison, and to reserve unsigned for bitmasks and types representing raw bits.
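    As a concrete illustration (a minimal sketch, not from the original answer; the vector and loop are invented for the example), here is the classic wrap-around you hit when subtracting from a container's unsigned size(), and the static_cast needed to keep the arithmetic signed:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v;  // deliberately empty

        // v.size() is unsigned (std::size_t). For an empty vector,
        // v.size() - 1 wraps around to SIZE_MAX instead of -1, so a loop
        // such as `for (std::size_t i = 0; i <= v.size() - 1; ++i)` would
        // run and index out of bounds instead of being skipped.

        // Keeping the arithmetic signed avoids the wrap-around, at the
        // cost of an explicit static_cast:
        for (std::ptrdiff_t i = 0;
             i <= static_cast<std::ptrdiff_t>(v.size()) - 1; ++i) {
            std::cout << v[i] << '\n';
        }
        std::cout << "done\n";  // prints "done"; the loop body never runs
    }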

    Choosing an unsigned type just because the expected domain of your values is non-negative is a Bad Idea. Most numbers are closer to 0 than they are to 2 billion, so with unsigned types a lot of your values sit near the edge of the valid range. To make things worse, even when the final result is in a known positive range, intermediate values in an expression may underflow, and if those intermediates are used further they may be VERY wrong values. Finally, even if your values are expected to always be positive, that doesn't mean they won't interact with other variables that can be negative, so you end up forced into mixing signed and unsigned types, which is the worst place to be.
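    A small sketch of the intermediate-underflow point (not from the original answer; the sold/returned variables are invented for the example):

    #include <cstdint>
    #include <iostream>

    int main() {
        // Both quantities are legitimately non-negative, so unsigned
        // "fits the domain" of the stored values.
        std::uint32_t sold = 10;
        std::uint32_t returned = 12;

        // The intermediate result underflows: 10 - 12 wraps around to
        // 4294967294 even though neither input is ever negative.
        std::uint32_t net_unsigned = sold - returned;

        // With signed arithmetic the intermediate value is simply -2.
        std::int64_t net_signed =
            static_cast<std::int64_t>(sold) - static_cast<std::int64_t>(returned);

        std::cout << "unsigned net: " << net_unsigned << '\n'   // 4294967294
                  << "signed   net: " << net_signed << '\n';    // -2
    }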
