Omitting the datatype (e.g. “unsigned” instead of “unsigned int”)

猫巷女王i 2021-01-31 08:01

I know that if the data type declaration is omitted in C/C++ code in this way: unsigned test=5;, the compiler automatically makes the variable an int (an unsigned int in this case). Is it considered bad practice to omit the int and write just unsigned?

4 answers
  • 2021-01-31 08:44

    Gratuitous verbosity considered harmful. I would never write unsigned int or long int or signed anything (except char or bitfields) because it increases clutter and decreases the amount of meaningful code you can fit in 80 columns. (Or more likely, encourages people to write code that does not fit in 80 columns...)

  • 2021-01-31 09:00

    unsigned is a data type! And it happens to alias to unsigned int.

    When you write unsigned x; you are not omitting any data type.

    This is completely different from “default int” which exists in C (but not in C++!) where you really omit the type on a declaration and C automatically infers that type to be int.

    As for style, I personally prefer to be explicit and thus to write unsigned int. On the other hand, I’m currently involved in a library where it’s convention to just write unsigned, so I do that instead.
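
    To see that they really are the same type, here is a minimal C++ sketch (my own illustration, not part of the original answer):

        #include <type_traits>

        // "unsigned" and "unsigned int" name exactly the same type,
        // so this assertion always holds.
        static_assert(std::is_same<unsigned, unsigned int>::value,
                      "unsigned is just another spelling of unsigned int");

        int main() {
            unsigned x = 5;   // identical to: unsigned int x = 5;
            (void)x;          // suppress the unused-variable warning
            return 0;
        }

        // Contrast with C89's "implicit int", where a type really was
        // omitted and inferred:  static x = 5;  /* x is an int in C89 */
        // That rule was removed in C99 and never existed in C++.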

  • 2021-01-31 09:02

    I would even take it one step further and use the fixed-width uint32_t type from <stdint.h>.
    It might be a matter of taste, but I prefer to know exactly which primitive I'm using rather than rely on the old convention of letting each platform pick whatever width it considers optimal.
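
    For instance, a minimal sketch using the fixed-width types from <cstdint> (uint32_t is formally optional, but available on virtually every platform):

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Exactly 32 bits wide wherever it is provided, unlike
            // "unsigned int", whose width is implementation-defined.
            std::uint32_t counter = 5;
            std::printf("counter = %u\n", static_cast<unsigned>(counter));
            return 0;
        }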

  • 2021-01-31 09:07

    As @Konrad Rudolph says, unsigned is a datatype. It's really just an alias for unsigned int.

    As to whether using unsigned is bad practice: I would say no, there is nothing wrong with using unsigned as a datatype specifier. Professionals won't be thrown by it, and any coding standard that says you have to write unsigned int is needlessly draconian, in my view.
