Difference between uint and unsigned int?

清歌不尽 2021-02-01 00:23

Is there any difference between uint and unsigned int? I'm looking on the site, but all the questions refer to C# or C++. I'd like to have an answer concerning C.

5 Answers
  • 2021-02-01 01:02

    I am extending a bit the answers by Erik, Teoman Soygul and taskinoor.

    uint is not a standard type.

    Hence using your own shorthand like this is discouraged:

    typedef unsigned int uint;

    If you look for platform specificity instead (e.g. you need to specify the number of bits your int occupies), then including stdint.h:

    #include <stdint.h>
    

    will expose the following standard categories of integers:

    • Integer types having certain exact widths

    • Integer types having at least certain specified widths

    • Fastest integer types having at least certain specified widths

    • Integer types wide enough to hold pointers to objects

    • Integer types having greatest width

    For instance,

    Exact-width integer types

    The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

    The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.

    For the exact-width types, stdint.h defines, among others:

    int8_t
    int16_t
    int32_t
    uint8_t
    uint16_t
    uint32_t
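
    As a quick illustration (a minimal sketch; the exact set of typedefs available, and the byte size printed, depend on the platform, assuming here a conforming implementation with 8-bit bytes), the fixed-width types can be used with the matching format macros from inttypes.h:

    ```c
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        uint32_t mask = 0xFFFFFFFFu;   /* exactly 32 bits, no padding */
        int8_t   tiny = -128;          /* exactly 8 bits, two's complement */

        /* PRIu32 / PRId8 are the printf format macros matching these widths */
        printf("mask = %" PRIu32 ", tiny = %" PRId8 "\n", mask, tiny);
        printf("sizeof(uint32_t) = %zu\n", sizeof(uint32_t));
        return 0;
    }
    ```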
    
  • 2021-02-01 01:05

    uint isn't a standard type - unsigned int is.

  • 2021-02-01 01:06

    Some systems may define uint as a typedef.

    typedef unsigned int uint;

    For these systems they are the same. But uint is not a standard type, so not every system may support it, and thus it is not portable.
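
    On a system that provides the typedef (or after writing it yourself, as in this minimal sketch), the two names denote the very same type, so values pass between them with no conversion at all:

    ```c
    #include <stdio.h>

    typedef unsigned int uint;   /* what such systems effectively do */

    int main(void) {
        uint a = 42u;
        unsigned int b = a;      /* same type, just two names for it */

        printf("same size: %d\n", (int)(sizeof(uint) == sizeof(unsigned int)));
        printf("a = %u, b = %u\n", a, b);
        return 0;
    }
    ```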

  • 2021-02-01 01:07

    All of the answers here fail to mention the real reason for uint.
    It's obviously a typedef of unsigned int, but that doesn't explain its usefulness.

    The real question is,

    Why would someone want to typedef a fundamental type to an abbreviated version?

    To save on typing?
    No, they did it out of necessity.

    Consider the C language; a language that does not have templates.
    How would you go about stamping out your own vector that can hold any type?

    You could do something with void pointers,
    but a closer emulation of templates would have you resorting to macros.

    So you would define your template vector:

    #define define_vector(type) \
      typedef struct vector_##type { \
        impl \
      } vector_##type;
    

    Declare your types:

    define_vector(int)
    define_vector(float)
    define_vector(unsigned int)
    

    And upon generation, realize that the types ought to be a single token:

    typedef struct vector_int { impl } vector_int;
    typedef struct vector_float { impl } vector_float;
    typedef struct vector_unsigned int { impl } vector_unsigned int;  /* invalid: two tokens */
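
    This is exactly where a single-token alias rescues the macro. A minimal working sketch (the define_vector macro and the fixed-capacity struct here are hypothetical illustrations, not a real library):

    ```c
    #include <stdio.h>

    typedef unsigned int uint;   /* single token, so ## token pasting works */

    /* A toy fixed-capacity "template": one struct per element type. */
    #define define_vector(type)          \
        typedef struct vector_##type {   \
            type data[8];                \
            int  size;                   \
        } vector_##type;

    define_vector(int)   /* -> vector_int */
    define_vector(uint)  /* -> vector_uint; define_vector(unsigned int) would not compile */

    int main(void) {
        vector_uint v = { {0}, 0 };
        v.data[v.size++] = 7u;
        v.data[v.size++] = 11u;
        printf("size = %d, first = %u\n", v.size, v.data[0]);
        return 0;
    }
    ```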
    
  • 2021-02-01 01:18

    unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int, as it is guaranteed to be supported by all conforming compilers.
