I have been wondering whether or not I should use the typedefs inside <cstdint>.
I personally prefer writing uint32_t over unsigned int, and int8_t over char, etc., since to me it is a lot more intuitive.
What do you guys think? Is it a good idea to use the typedefs from <cstdint>? Are there any disadvantages?
Actually, I would suggest using both.
If you want something that is definitely 32-bits unsigned, use uint32_t. For example, if you are implementing a "struct" to represent an external object whose specification defines one of its fields as 32 bits unsigned.
If you want something that is the "natural word size of the machine", use int or unsigned int. For example:
for (int i = 0 ; i < 200 ; ++i)
// stuff
The "natural word size of the machine" is going to give you the best performance, both on today's processors and on tomorrow's.
Use "char" if you mean "character"; "char" or "unsigned char" if you mean "byte". Strictly speaking, C and C++ only let you access an arbitrary object's bytes via "char *" (or "unsigned char *"), not anything else.
Use uint8_t or int8_t if you specifically want an 8-bit integer, similar to uint32_t.
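To illustrate the split, here is a minimal sketch (the header layout and names are hypothetical): fixed-width types where an external spec pins down the bits, plain int where only the "natural word size" matters.

```cpp
#include <cstdint>

// Hypothetical on-wire record: the spec fixes each field's exact width,
// so the fixed-width typedefs are the right tool here.
struct PacketHeader {
    uint32_t sequence;  // spec says: 32 bits, unsigned
    uint16_t length;    // spec says: 16 bits, unsigned
    uint8_t  flags;     // spec says: 8 bits
};

// Plain int for a loop counter: the natural word size of the machine.
int sumFirstN(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i;
    return total;
}
```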
You should use both. You should use int, as explained in the other answers, when you need a "reasonably sized" integer. Use char when you need a character: it's self-documenting.
You should use uint32_t and friends when interfacing with the outside world in binary: when doing network programming, handling binary files, using foreign multi-byte encodings, etc. In these cases, the exact size of a type is crucial to writing correct, portable, self-documenting code. That's what <stdint.h> (or C++0x's <cstdint>) is for.
(Endianness is equally crucial, but that's another matter entirely.)
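As a sketch of both points at once, here is one way to decode a 32-bit unsigned big-endian value from a byte buffer; uint32_t guarantees exactly 32 bits on every platform, and assembling it byte by byte handles the endianness explicitly:

```cpp
#include <cstdint>

// Decode a 32-bit unsigned big-endian integer from a byte buffer.
// uint32_t guarantees exactly 32 bits regardless of the host platform,
// and building the value byte by byte is endianness-independent.
uint32_t readBE32(const unsigned char* buf) {
    return (static_cast<uint32_t>(buf[0]) << 24) |
           (static_cast<uint32_t>(buf[1]) << 16) |
           (static_cast<uint32_t>(buf[2]) << 8)  |
            static_cast<uint32_t>(buf[3]);
}
```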
It depends on the purpose of the variable.
If you need a loop counter, use int. If you need a string, use an array of char.
If you need a numeric variable that can hold -1 to 100, int8_t is good. If you need to represent a value from 0 to 100,000, then uint_least32_t (thanks @Serge) is an excellent choice.
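A sketch of the idea (the struct and field names are made up): int8_t expresses the exact small range, while uint_least32_t asks only for "at least 32 bits", which is the smallest guarantee that covers 0 to 100,000 everywhere.

```cpp
#include <cstdint>

// Hypothetical record matching the ranges above:
// -1..100 fits in int8_t; 0..100,000 needs at least 32 bits,
// so uint_least32_t (smallest type with >= 32 bits) works on every platform.
struct Reading {
    int8_t         percent;  // -1 .. 100
    uint_least32_t count;    // 0 .. 100,000
};
```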
One particular situation in which you'll need to use the typedefs from cstdint is when dealing with code that does a lot of pointer-to-int conversions, in which case using intptr_t is an absolute requirement.
In the company I work for, we are preparing to migrate tons of poor-quality C/C++ code from 32 bits to 64 bits. The code keeps casting pointers to int and then back to pointers, which will definitely fail on 64-bit architectures. We'll attempt to sanitize the code wherever possible (i.e., modify data structures and interfaces to remove the need for the casts entirely) and use intptr_t instead of int everywhere else.
On a side note: casting in general should raise suspicion, but seriously, casting pointers to integers is almost always the consequence of a serious flaw somewhere in your design. Basically, you're lying to the compiler, the platform and, more importantly, your co-workers every time you hide a pointer behind an int.
Other than that, like others said: use generic types when possible and type of explicit size when required.
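A sketch of the safe round trip: intptr_t is defined to be wide enough to hold any object pointer, so pointer → intptr_t → pointer preserves the value, while int can silently truncate on 64-bit platforms (e.g. LP64, where int is 32 bits and pointers are 64).

```cpp
#include <cstdint>

// Safe pointer <-> integer round trip: intptr_t is guaranteed wide
// enough to hold an object pointer. The same trip through int would
// truncate the upper half of the address on LP64 platforms.
bool roundTripSurvives(int* p) {
    intptr_t bits = reinterpret_cast<intptr_t>(p);
    int* back = reinterpret_cast<int*>(bits);
    return back == p;
}
```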
You may not yet know whether you need uint32_t or unsigned int. This is perfectly normal, as you don't necessarily know how your type will be used later.
Just use a typedef.
Regardless of whether you need an unsigned int or a uint32_t (which you can decide later, when you have a more complete view of what your program will be), using a typedef will make your code clearer by specifying what you are really manipulating, and it will make it easier to change to another type when you figure out, months later, that your initial choice was the worst one. There isn't a "right answer" here, because you usually figure these things out the hard way. Interoperating between one library that wants uint32_t and another that wants int is painful.
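A sketch of the idea (the CustomerId name is made up): name the role, and the underlying width becomes a one-line change.

```cpp
#include <cstdint>

// One place to change if uint32_t later turns out to be the wrong choice:
typedef uint32_t CustomerId;   // C++11 alternative: using CustomerId = uint32_t;

CustomerId nextId(CustomerId current) {
    return current + 1;
}
```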
Use templates and generic programming wherever possible. Don't rely on any types if you don't have to!
If you have a function that takes a number and returns it multiplied by 2, write it like this:
template <typename Number>
inline Number doubleNum(Number n) {
return 2*n;
}
Or even this:
template <typename RetType, typename NumberType>
inline RetType doubleNum(NumberType n) {
return 2*n;
}
This way, if you have a library that uses ints, doubles, uint64_ts - you name it - you can work with it, without rewriting your code. If you need to work with binary files or network programming, you can work with fixed-size types, without rewriting your code. And if you need arbitrary-precision numbers, you can work with a class which implements the same interface as a primitive integer or float type via operator overloading, such as a GMP wrapper, without rewriting your code.
And you can specialize templated functions or classes, to optimize specific cases or work with classes (or C structs) that don't conform to the relevant interface:
/*
* optimization:
* primitive integers can be bit-shifted left to
* achieve multiplication by powers of 2
* (note that any decent compiler will do this
* for you, under the hood.)
*/
template <>
inline int doubleNum<int>(int n) {
return n<<1;
}
/*
* optimization:
* use assembly code
*/
template <>
inline int32_t doubleNum<int32_t>(int32_t n) {
asm {
...
}
}
/*
* work with awkward number class
*/
template <>
inline AwkwardNumber doubleNum<AwkwardNumber>(AwkwardNumber n) {
n.multiplyBy(2);
return n;
}
Source: https://stackoverflow.com/questions/6144682/should-i-use-cstdint