A little test program:
#include <cstdio>
const float TEST_FLOAT = 1/60;
const float TEST_A = 1;
const float TEST_B = 60;
const float TEST_C = TEST_A / TEST_B;

int main()
{
    // Print both constants so the two results can be compared.
    std::printf("TEST_FLOAT = %f, TEST_C = %f\n", TEST_FLOAT, TEST_C);
}
In 1/60, both of the operands are integers, so integer arithmetic is performed. To perform floating-point arithmetic, at least one of the operands needs to have a floating-point type. For example, any of the following would perform floating-point division:
1.0/60
1.0/60.0
1/60.0
(You might choose to use 1.0f instead, to avoid any precision-reduction warnings; 1.0 has type double, while 1.0f has type float.)
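To see the difference directly, here is a small sketch you can compile and run (the printed labels and the frame_time name are just illustrative, not from the original program):

#include <cstdio>

int main()
{
    // Both operands are int: integer division, result is 0.
    std::printf("1/60     = %f\n", static_cast<double>(1 / 60));

    // At least one operand is a floating-point type: result is ~0.0166667.
    std::printf("1.0/60   = %f\n", 1.0 / 60);
    std::printf("1.0/60.0 = %f\n", 1.0 / 60.0);
    std::printf("1/60.0   = %f\n", 1 / 60.0);

    // A float literal keeps the whole expression in float,
    // avoiding the double-to-float precision-reduction warning.
    float frame_time = 1.0f / 60;
    std::printf("1.0f/60  = %f\n", frame_time);
}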
Shouldn't TEST_FLOAT have the same value as TEST_C?
In the TEST_FLOAT case, integer division is performed, and the result of that integer division is then converted to float in the assignment.
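In other words, TEST_FLOAT behaves as if it were written like this (a sketch of the equivalent steps, not the original code):

// 1/60 is evaluated in int arithmetic (giving 0), then converted to float:
const float TEST_FLOAT = static_cast<float>(1 / 60);   // 0.0f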
In the TEST_C case, the integer literals 1 and 60 are converted to float when they are assigned to TEST_A and TEST_B; floating-point division is then performed on those floats and the result is assigned to TEST_C.
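TEST_C, by contrast, behaves roughly like this sketch:

const float TEST_A = static_cast<float>(1);    // 1.0f
const float TEST_B = static_cast<float>(60);   // 60.0f
const float TEST_C = TEST_A / TEST_B;          // float / float: ~0.0166667f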
Is the TEST_C value resolved at compile time or at runtime?
It depends on the compiler; either method would be standards-conforming.