Please explain why the following pieces of code behave differently.
#include <stdio.h>
int main() {
    float a = 0.1;
    if (a < 0.1)
        printf("less");
    else
        printf("greater than equal");
    getchar();
}
Floating point numbers are not exact. Specifically, your number is not necessarily compared against a float. The same code works as you expect if you use '0.7f' instead of '0.7' (on at least my compiler), but you should generally be comparing against a threshold, as the previous answer states.
I would recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic. Basically, when working with floating point numbers you should always check whether a number is equal to, smaller than, or greater than another number within some precision (epsilon) that you have defined.
There are two different types involved here: float and double. You're assigning to a float, but then comparing with a double.

Imagine that float and double were actually 2- and 4-digit decimal floating point types instead. Now imagine that you had:
float a = 0.567123;   // Actually assigns 0.57
if (a < 0.567123)     // Actually compares 0.5700 with 0.5671
{
    // We won't get in here
}

float a = 0.123412;   // Actually assigns 0.12
if (a < 0.123412)     // Actually compares 0.1200 with 0.1234
{
    // We will get in here
}
Obviously this is an approximation of what's going on, but it explains the two different kinds of results.
It's hard to say what you should be doing without more information - it may well be that you shouldn't be using float and double at all, or that you should be comparing using some level of tolerance, or you should be using double everywhere, or you should be accepting some level of "inaccuracy" as just part of how the system works.
#include <stdio.h>
int main() {
    float a = 0.7;
    if (a < 0.7)
        printf("less");
    else
        printf("greater than equal");
    getchar();
}
Unsuffixed floating constants are of type double, not float. For example, 0.7 is a floating constant of type double.

if (a < 0.7)

As the right operand in the comparison expression is of type double, the usual arithmetic conversions apply and the value of a is promoted to double. double and float don't have the same precision. In your case, to get the correct result you should have used a floating constant of type float:

if (a < 0.7f)

0.7f is a floating constant of type float.
You cannot reliably use exact comparison operators on floating point numbers.

A good way of comparing two floating point numbers is to use an accuracy threshold that is relative to the magnitude of the two numbers being compared. Something like:

#include <math.h>

if (fabs(a - b) <= accuracy_threshold * fabs(a))
When you perform the comparison, you are comparing a float (with 24 bits of precision) with a double (with 53 bits of precision). Your float value is created by rounding from the more precise double value. Sometimes it rounds down and sometimes it rounds up, so it will hardly ever be exactly the same.
Try your example again but test for all three possible results: equal, less, and greater.
Try your example again with a as a double.
Try your example again and compare against a float.