I know that if the data type is omitted in a C/C++ declaration like this: unsigned test = 5; the compiler automatically makes this variable an int (an unsigned int in this case). Is relying on this considered bad practice? Should I write unsigned int explicitly instead?
Gratuitous verbosity considered harmful. I would never write unsigned int or long int or signed anything (except char or bitfields), because it increases clutter and decreases the amount of meaningful code you can fit in 80 columns. (Or, more likely, it encourages people to write code that does not fit in 80 columns...)
unsigned is a data type! And it happens to alias to unsigned int.

When you write unsigned x; you are not omitting any data type. This is completely different from “default int”, which exists in C (prior to C99, but never in C++!), where you really do omit the type in a declaration and C automatically infers it to be int.
As for style, I personally prefer to be explicit and thus write unsigned int. On the other hand, I’m currently involved in a library where the convention is to just write unsigned, so I do that instead.
I would even take it one step further and use the fixed-width uint32_t type from <stdint.h>. It might be a matter of taste, but I prefer to know exactly which primitive I’m using over some ancient consideration of optimising per platform.
As @Konrad Rudolph says, unsigned is a data type. It's really just an alias for unsigned int.
As to the question of using unsigned being bad practice? I would say no; there is nothing wrong with using unsigned as a data type specifier. Professionals won't be thrown by it, and any coding standard that says you have to write unsigned int is needlessly draconian, in my view.