Is it OK to do this?
double doubleVariable = 0.0;
if (doubleVariable == 0) {
...
}
Or would this code suffer from a potential rounding problem?
What you have there is 0, which is an integer literal. It is implicitly converted to a double, which you could represent with the double literal 0.0. Then there is a comparison between the two doubles. A rounding error could cause doubleVariable to not be equal to 0.0 (through some other math you might do, not just setting it), but there can never be a rounding error when converting the integer 0 to a double. The code you have there is totally safe, but I would favor == 0.0 instead.
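To make that concrete, here is a quick sketch (written in Java, though the behaviour is the same in C#): a freshly assigned 0.0 compares equal to 0 exactly, while a value produced by arithmetic may not.

```java
public class ZeroCompareDemo {
    public static void main(String[] args) {
        double doubleVariable = 0.0;
        // The int literal 0 is widened to 0.0 exactly, so this is safe:
        System.out.println(doubleVariable == 0);   // true

        // After arithmetic, a value that "should" be zero may not be:
        double drifted = 0.1 + 0.2 - 0.3;
        System.out.println(drifted == 0);          // false
        System.out.println(drifted);               // a tiny nonzero value, about 5.55e-17
    }
}
```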
You should not use double for such comparisons; double creates problems. For example:
double n1 = 0.55;
double n2 = 100;
double ans = n1 * n2;
ans should be 55.0, but when you debug, ans is 55.000000000000007, so
if (ans == 55.0)
will fail. In such cases you can face a problem.
If you're just comparing a double variable against 0.0 (or 0), I believe it's safe to do it that way because I think 0 can be represented exactly in floating point, but I'm not 100% sure.
In general, the suggested approach for comparing floating point numbers is to choose a "delta" value at which you'll consider two doubles to be equal if their difference is less than the delta. This handles exact representation limitations with floating point numbers.
double first = 1.234;
double second = 1.2345;
double difference = Math.Abs(first - second);
double threshold = 0.000001; // doubles are equal if their difference is less than this value - you choose this value based on your needs
bool areEqual = difference < threshold;
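Applied to the 0.55 * 100 example from the earlier answer (a Java sketch; the 1e-9 threshold is just an illustrative choice, pick one that suits your data):

```java
public class ThresholdCompareDemo {
    public static void main(String[] args) {
        double ans = 0.55 * 100;                  // not exactly 55.0
        System.out.println(ans == 55.0);          // false: exact comparison fails

        double threshold = 1e-9;                  // tolerance chosen for this example
        boolean areEqual = Math.abs(ans - 55.0) < threshold;
        System.out.println(areEqual);             // true: within tolerance
    }
}
```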
Nope, it's perfectly legal if you are only going to compare against 0, as the right side of the comparison will automatically be cast to double. On the other hand, you would hit round-off errors if you were to compare against something like == 0.10000001.
You are better off reading the discussion about comparing floats to 0 here: Is it safe to check floating point values for equality to 0?
This discussion is also very informative about weird precision problems with floats: Why the result is different for this problem?
i.e. the following will yield false:
double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1 - d2) == 1.0);
Hmm... I think that as long as the number has an exact binary representation (like 0), the comparison is perfectly valid.
Try:
if (double.Equals(doubleValue, 0.0)) {}
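Worth noting: an Equals-style check is still an exact, bit-for-bit comparison, so it carries the same rounding caveat as ==. A quick Java sketch using Double.compare as the nearest analogue (my choice of analogue, not something from the thread):

```java
public class EqualsStyleDemo {
    public static void main(String[] args) {
        double doubleValue = 0.0;
        // Exact checks agree when the value really is zero:
        System.out.println(Double.compare(doubleValue, 0.0) == 0);  // true

        // ...but they fail the same way == does once rounding creeps in:
        double ans = 0.55 * 100;
        System.out.println(Double.compare(ans, 55.0) == 0);         // false
        System.out.println(ans == 55.0);                            // false
    }
}
```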