Is using an unsigned rather than signed int more likely to cause bugs? Why?

一个人的身影 2021-01-30 06:12

In the Google C++ Style Guide, on the topic of "Unsigned Integers", it is suggested that

    Because of historical accident, the C++ standard also uses unsi…

7 Answers
  •  北恋 (OP)
     2021-01-30 06:57

    Why is using an unsigned int more likely to cause bugs than using a signed int?

    For certain classes of tasks, using an unsigned type is no more likely to cause bugs than using a signed type.

    Use the right tool for the job.

    What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
    Why is using an unsigned int more likely to cause bugs than using a signed int?

    If the task is well matched: nothing is wrong. And no, it is not more likely.

    Security, encryption, and authentication algorithms count on unsigned modular math.
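    As a minimal illustration of code that relies on unsigned modular math, here is the well-known 32-bit FNV-1a hash (not from the answer above; chosen only as an example). Its multiply step deliberately wraps modulo 2^32, which is well defined for uint32_t but would be undefined behavior with a signed accumulator:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* 32-bit FNV-1a: relies on unsigned multiplication wrapping mod 2^32. */
    static uint32_t fnv1a(const char *s) {
        uint32_t h = 2166136261u;      /* FNV offset basis */
        while (*s) {
            h ^= (uint8_t)*s++;
            h *= 16777619u;            /* FNV prime; wraps mod 2^32 by design */
        }
        return h;
    }

    int main(void) {
        printf("%08x\n", fnv1a("hello"));
        return 0;
    }
    ```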

    Compression/decompression algorithms, as well as code handling various graphics formats, also benefit from unsigned math and are less buggy with it.

    Any time bit-wise operators and shifts are used, unsigned operations avoid the sign-extension issues of signed math.
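    A short sketch of the sign-extension issue (my example, not from the answer): right-shifting a negative signed value is implementation-defined, and on typical two's-complement platforms the sign bit is copied into the vacated high bits, while the unsigned shift is fully specified:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t  s = -16;          /* bit pattern 0xFFFFFFF0 */
        uint32_t u = 0xFFFFFFF0u;  /* same bit pattern, unsigned */

        /* On a typical arithmetic-shift platform, s >> 2 is -4:
           the high bits are filled with copies of the sign bit. */
        printf("%d\n", s >> 2);

        /* u >> 2 is 0x3FFFFFFC, guaranteed by the standard:
           the high bits are filled with zeros. */
        printf("%08x\n", u >> 2);
        return 0;
    }
    ```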


    Signed integer math has an intuitive look and feel, readily understood by everyone, including those learning to code. C/C++ was not originally targeted as an introductory language, nor should it be one now. For rapid coding with safety nets against overflow, other languages are better suited. For lean, fast code, C assumes that coders know what they are doing (that is, that they are experienced).

    A pitfall of signed math today is the ubiquitous 32-bit int, which for so many problems is wide enough for common tasks without range checking. This leads to complacency: overflow is simply not coded against. Instead, for (int i = 0; i < n; i++) and int len = strlen(s); are viewed as OK because n is assumed to be less than INT_MAX and strings are assumed never to be too long, rather than the first case being fully range-protected and the second using size_t, unsigned, or even long long.

    C/C++ developed in an era that included 16-bit as well as 32-bit int, and the extra bit that an unsigned 16-bit size_t affords was significant. Attention to overflow issues was needed whether the type was int or unsigned.

    With the 32-bit (or wider) applications Google writes, on platforms where int/unsigned are not 16-bit, the ample range of int affords a lack of attention to +/- overflow. It makes sense for such applications to encourage int over unsigned. Yet int math is not well protected.

    The narrow 16-bit int/unsigned concerns still apply today in select embedded applications.

    Google's guidelines apply well to the code they write today. They are not a definitive guideline for the much wider scope of C/C++ code.


    One reason I can think of for using signed int over unsigned int is that if it overflows (to negative), it is easier to detect.

    In C/C++, signed int overflow is undefined behavior, and so it is certainly not easier to detect than the defined wraparound behavior of unsigned math.
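    To sketch the point (my example, under the assumption of the usual two's-complement hardware): unsigned wraparound is defined and can be tested for after the fact, whereas the signed equivalent is undefined behavior that the compiler may assume never happens:

    ```c
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* Unsigned overflow is defined: it wraps modulo 2^N,
           so a wrap can be detected with a post-hoc comparison. */
        unsigned a = UINT_MAX, b = 2u;
        unsigned sum = a + b;          /* wraps to 1, well-defined */
        if (sum < a) {
            printf("unsigned wrap detected\n");
        }

        /* The signed equivalent, INT_MAX + 1, is undefined behavior.
           A test like `a + b < a` with signed operands may be
           optimized away entirely, because the compiler may assume
           signed overflow never occurs. */
        return 0;
    }
    ```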


    As @Chris Uzdavinis well commented, mixing signed and unsigned is best avoided by everyone (especially beginners), and otherwise coded carefully when needed.
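    A short sketch of why mixing them bites (my example, not from the comment): the usual arithmetic conversions turn a negative int into a huge unsigned value, so comparisons silently go wrong:

    ```c
    #include <stdio.h>

    int main(void) {
        int n = -1;
        unsigned u = 1u;

        /* n is converted to unsigned here, becoming UINT_MAX,
           so this comparison is false even though -1 < 1. */
        if (n < u) {
            printf("never printed\n");
        } else {
            printf("-1 compared greater than 1u\n");
        }

        /* The same conversion causes a classic infinite loop:
           for (unsigned i = 10; i >= 0; i--)  // i >= 0 is always true
        */
        return 0;
    }
    ```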
