In an article on MSDN, it states that the double data type has a range of "-1.79769313486232e308 .. 1.79769313486232e308", whereas the long data type only has a range of "-9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807". How can double's range be so much larger?
A simple answer is that double is only accurate to 15-16 significant digits, whereas long (as an integer type) is exactly accurate across its explicit digit limit, in this case 19 digits. (Keep in mind that digits and values are semantically different.) The ranges compare as follows (a short sketch after the list illustrates the precision trade-off):
double: ±0.000,000,000,000,01 to ±99,999,999,999,999.9 (at 100% accuracy; accuracy is lost starting from the 16th digit, which is how the full range "-1.79769313486232e308 .. 1.79769313486232e308" is reached)
long: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
ulong: 0 to 18,446,744,073,709,551,615 (one more digit than long, but the same number of distinct values, since the range is simply shifted to exclude negative values)
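Here is a minimal sketch of that trade-off (the class name is hypothetical; assumes C# 7.0+ for digit separators). Above 2^53, double can no longer distinguish every integer, even though its overall range dwarfs long's:

```csharp
using System;

class PrecisionDemo // hypothetical name, illustration only
{
    static void Main()
    {
        // 2^53 is the last point below which double represents every integer exactly.
        long a = 9_007_199_254_740_992;  // 2^53, about 16 digits
        long b = a + 1;                  // one more: not representable as a double

        Console.WriteLine((double)a == (double)b);  // True: b rounds to a

        // long is exact over all 19 digits of its range...
        Console.WriteLine(long.MaxValue);           // 9223372036854775807

        // ...while double trades that exactness for an enormously larger range.
        Console.WriteLine(double.MaxValue);         // ~1.7976931348623157E+308
    }
}
```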
In general, integer types are preferred over floating-point values, unless you explicitly need a fractional (decimal) representation (for whatever purpose).
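A quick illustration of why (the class name is hypothetical): binary floating point cannot represent most decimal fractions exactly, while counting in whole integer units stays exact.

```csharp
using System;

class DecimalDrift // hypothetical name, illustration only
{
    static void Main()
    {
        // Classic example: 0.1 has no exact binary representation.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004

        // Counting in integer units (e.g., cents instead of dollars) stays exact.
        long cents = 10 + 20;
        Console.WriteLine(cents == 30);       // True
    }
}
```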
In addition, you may know that signed types are preferred over unsigned ones, since the former are much less bug-prone (consider the statement uint i;, then i - x; where x > i, as shown in the sketch below).
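A small sketch of that pitfall (the values are made up for illustration): in C#'s default unchecked context, the subtraction silently wraps around instead of producing a negative number.

```csharp
using System;

class UnsignedWrap // hypothetical name, illustration only
{
    static void Main()
    {
        uint i = 5;
        uint x = 10;

        // With a signed type this would be -5; with uint it silently
        // wraps modulo 2^32 in the default unchecked context.
        uint wrapped = i - x;
        Console.WriteLine(wrapped);      // 4294967291 (i.e., 2^32 - 5)

        int signedResult = 5 - 10;
        Console.WriteLine(signedResult); // -5, as intuition expects
    }
}
```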