Could anyone tell me why these two modulus calculations yield two different outcomes? I just need to blame someone or something other than myself for all the hours I lost tracking down this bug.
public void test1()
{
    int stepAmount = 100;
    float t = 0.02f;

    float remainder = t % (1f / stepAmount);
    Debug.Log("Remainder: " + remainder);
    // Remainder: 0.01

    float fractions = 1f / stepAmount;
    remainder = t % fractions;
    Debug.Log("Remainder: " + remainder);
    // Remainder: 0
}
Using VS-2017 V15.3.5
My best bet is that this is due to the liberty the runtime has to perform floating-point operations at a higher precision than the types involved, and then truncate the result to the type's precision when assigning:
The CLI specification in section 12.1.3 dictates an exact precision for floating point numbers, float and double, when used in storage locations. However it allows for the precision to be exceeded when floating point numbers are used in other locations like the execution stack, arguments, return values, etc. What precision is used is left to the runtime and underlying hardware. This extra precision can lead to subtle differences in floating point evaluations between different machines or runtimes.
Source here.
In your first example, t % (1f / stepAmount) can be performed entirely at a higher precision than float and only truncated when the result is assigned to remainder, while in the second example, 1f / stepAmount is truncated when it is assigned to fractions, prior to the modulus operation.
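As a sketch of that difference (a self-contained console program I wrote for illustration, using Console.WriteLine instead of Unity's Debug.Log), an explicit (float) cast forces a narrowing conversion, which should make the one-line form behave like the two-step form:

```csharp
using System;

class ModulusDemo
{
    static void Main()
    {
        int stepAmount = 100;
        float t = 0.02f;

        // The quotient may live on the evaluation stack at higher precision.
        // The explicit (float) cast forces it down to float precision, so the
        // modulus sees the same truncated value as the two-step version below.
        float r1 = t % (float)(1f / stepAmount);

        float fractions = 1f / stepAmount;  // truncated when stored
        float r2 = t % fractions;

        Console.WriteLine("r1: " + r1);
        Console.WriteLine("r2: " + r2);  // the cast should make r1 and r2 agree
    }
}
```

Whether the uncast version actually differs still depends on the runtime and hardware, which is exactly the latitude the spec quote above describes.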
As to why making stepAmount a const makes both modulus operations consistent: 1f / stepAmount then becomes a constant expression that is evaluated and truncated to float precision at compile time, no different from writing 0.01f, which makes the two examples equivalent.
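A minimal sketch of that const variant (again a console program of my own, not from the original post):

```csharp
using System;

class ConstDemo
{
    static void Main()
    {
        const int stepAmount = 100;
        float t = 0.02f;

        // With stepAmount const, 1f / stepAmount is folded at compile time
        // into the float constant 0.01f, so both expressions operate on
        // identical float operands.
        float r1 = t % (1f / stepAmount);

        float fractions = 1f / stepAmount;
        float r2 = t % fractions;

        // Same operands on both sides, so the remainders agree.
        Console.WriteLine(r1 == r2);
    }
}
```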
Source: https://stackoverflow.com/questions/47189863/modulus-gives-wrong-outcome