Why doesn't infinity comparison follow the logic applied to NaNs? This code prints out false
three times:
double a = Double.NaN;
double b = Double.NaN;
System.out.println(a == b); // false
System.out.println(a < b); // false
System.out.println(a > b); // false
However, if I change Double.NaN to Double.POSITIVE_INFINITY, I get true for equality, but false for the greater-than and less-than comparisons:
double a = Double.POSITIVE_INFINITY;
double b = Double.POSITIVE_INFINITY;
System.out.println(a == b); // true
System.out.println(a < b); // false
System.out.println(a > b); // false
This seems dangerous. Assuming that infinite values result from overflows, I imagine it's more likely that two variables that ended up as infinities wouldn't actually be equal in perfect arithmetic.
Your reasoning is that Double.POSITIVE_INFINITY should not be equal to itself because it is “likely” to have been obtained as the result of a loss of accuracy.
This line of reasoning applies to all of floating-point. Any finite value can be obtained as the result of an inaccurate operation. That did not push the IEEE 754 standardization committee to define == as always evaluating to false for finite values, so why should infinities be different?
As defined, == is useful for people who understand what it does (that is, test the floating-point values that have been obtained, and certainly not the values that should have been obtained with real computations). For anyone who understands that — and you need to understand it to use floating-point even for computations that do not involve infinity — having Double.POSITIVE_INFINITY == Double.POSITIVE_INFINITY evaluate to true is convenient, if only to test whether the result of a floating-point computation is Double.POSITIVE_INFINITY.
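As a minimal sketch of that convenience (the class name here is purely illustrative), detecting an overflow requires nothing beyond a plain comparison:

```java
public class InfinityCheck {
    public static void main(String[] args) {
        double product = 1.0e200 * 1.0e200; // real result 1e400 overflows the double range
        // Because +inf == +inf is true, an ordinary comparison detects the overflow:
        System.out.println(product == Double.POSITIVE_INFINITY); // true
    }
}
```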
That leaves the question of why NaN can afford to have special behavior while infinities must follow the same general principles as finite values. NaN is different from infinities: the underlying principle of the IEEE 754 standard is that values are exactly what they are, but the result of an operation can be approximated with respect to the real result; in that case, the resulting floating-point value is obtained according to the rounding mode.
Forget for an instant that 1.0 / 0.0 is defined as +inf, which is an annoyance in this discussion. Think for the moment of Double.POSITIVE_INFINITY only as the result of operations such as 1.0e100 / 1.0e-300 or Double.MAX_VALUE + Double.MAX_VALUE. For these operations, +inf is the closest approximation of the real result, just like for operations that produce a finite result. By contrast, NaN is the result you obtain when the operation doesn't make sense. It is defensible to have NaN behave specially, but inf is just an approximation of all the values too large to represent.
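A short sketch of that contrast (class name is illustrative), using the same operations mentioned above plus one that has no sensible result:

```java
public class InfVsNaN {
    public static void main(String[] args) {
        // Overflow: +inf is the closest representable approximation of the huge real result.
        System.out.println(1.0e100 / 1.0e-300);                 // Infinity
        System.out.println(Double.MAX_VALUE + Double.MAX_VALUE); // Infinity
        // Meaningless operation: no number approximates "0 divided by 0".
        System.out.println(0.0 / 0.0);                           // NaN
    }
}
```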
In reality, 1.0 / 0.0 also produces +inf, but that should be considered an exception. It would have been just as coherent to define the result of that operation as NaN, but defining it as +inf was more convenient in the implementation of some algorithms. An example is provided on page 10 of Kahan's notes. More details than most will wish for are in the article “Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit”. I would also interpret the existence in IEEE 754 of a “division by zero” flag, separate from the NaN flag, as recognition that the user may want to treat division by zero specially even though it is not defined as producing NaN.
Because that's the standard. Infinity represents a number greater than Double.MAX_VALUE or less than -Double.MAX_VALUE.
NaN represents the outcome of an operation that didn't make sense; that is, the operation could not possibly produce a number.
I would guess the logic is that once a number gets big enough (infinity), because of the limitations of floating-point numbers, adding to it won't change the outcome, so it's 'like' infinity.
So if you want to compare really big numbers, at some point you might just say those two big numbers are close enough for all intents and purposes. But if you want to compare two things that both aren't numbers, you can't compare them, so the result is false. At least you couldn't compare them as primitives.
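The "adding to it won't change the outcome" point can be shown directly (class name is illustrative):

```java
public class Absorbing {
    public static void main(String[] args) {
        double big = Double.POSITIVE_INFINITY;
        // Infinity absorbs any finite addend, even one near Double.MAX_VALUE:
        System.out.println(big + 1.0e308 == big); // true
    }
}
```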
Why are infinities equal? Because it works.
Floating-point arithmetic is designed to produce (relatively) fast computations that preserve errors. The idea being that you don't check for overflows or other nonsense during a lengthy calculation; you wait until it's finished. That's why NaNs propagate the way they do: once you've gotten a NaN, there are very few things you can do that will make it go away. Once the computation is finished you can look for NaNs to check whether something went wrong.
Same for infinities: if there's a possibility of overflow, don't do things that will throw away infinities.
If you want to go slow and safe, IEEE 754 has mechanisms for installing trap handlers to provide callbacks into your code when the result of a calculation would be a NaN or an infinity. Mostly that's not used; it's usually too slow, and pointless once the code has been properly debugged (not that that's easy: people get PhDs in how to do this stuff well).
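The "wait until the computation is finished" style can be sketched like this (class name is illustrative):

```java
public class NaNPropagation {
    public static void main(String[] args) {
        double x = Math.sqrt(-1.0);   // NaN: square root of a negative number
        double y = x * 2.0 + 100.0;   // NaN propagates through further arithmetic
        // No checks during the computation; one check at the end finds the problem:
        System.out.println(Double.isNaN(y)); // true
    }
}
```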
Another perspective that justifies "infinite" values being equal is to avoid the cardinality concept altogether. Essentially, if you cannot speculate on "just how infinite a value is compared to another, given that both are infinite", it's simpler to assume that Inf = Inf.
Edit: as clarification of my comment on cardinality, I'll give two examples involving comparison (or equality) of infinite quantities.
Consider the set of positive integers S1 = {1, 2, 3, ...}, which is infinite. Also consider the set of even positive integers S2 = {2, 4, 6, ...}, which is also infinite. While there are intuitively twice as many elements in S1 as in S2, they have "equally many" elements, since you can easily construct a one-to-one correspondence between the sets, i.e. 1 -> 2, 2 -> 4, ... They thus have the same cardinality.
Consider instead the set of real numbers R and the set of integers I. Again, both are infinite sets. However, for each integer i there are infinitely many real numbers in the interval (i, i+1). No one-to-one correspondence can be made between the elements of these two sets, and thus their cardinalities are different.
Bottom line: equality of infinite quantities is complicated; easier to avoid it in imperative languages :)
To me, it seems that "because it should behave the same as zero" makes a good answer. Arithmetic overflow and underflow should be handled similarly.
If you underflow below the smallest near-infinitesimal value which can be stored in a float, you get zero, and zeros compare as identical.
If you overflow past the largest near-infinite value which can be stored in a float, you get INF, and INFs compare as identical.
This means that code which handles numbers which are out-of-scope in both directions will not require separate special-casing for one or the other. Instead, either both or neither will need to be treated differently.
And the simplest requirement is covered by the "neither" case: if you want to check whether something over/underflowed, you can compare it to zero/INF using just the normal arithmetic comparison operators, without needing to know your current language's special syntax for the check: is it Math.isInfinite(), Float.checkForPositiveInfinity(), hasOverflowed()...?
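A sketch of that symmetry in Java (class name is illustrative):

```java
public class RangeCheck {
    public static void main(String[] args) {
        double overflowed  = Double.MAX_VALUE * 2.0;  // overflows to +Infinity
        double underflowed = Double.MIN_VALUE / 2.0;  // rounds down to 0.0
        // Plain comparisons detect both conditions; no special predicate needed:
        System.out.println(overflowed == Double.POSITIVE_INFINITY); // true
        System.out.println(underflowed == 0.0);                     // true
    }
}
```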
The correct answer is a simple "because the standard (and the docs) say so". But I'm not gonna be cynical because it's obvious that's not what you are after.
In addition to the other answers here, I'll try to relate the infinities to saturating arithmetic.
Other answers have already explained why the comparisons on NaNs result in false, so I'm not gonna beat a dead horse.
Let's say I have a saturating integer that represents grayscale colors. Why am I using saturating arithmetic? Because anything brighter than white is still white, and anything darker than black is still black (except orange). That means BLACK - x == BLACK and WHITE + x == WHITE. Makes sense?
Now, let's say we want to represent those grayscale colors with a (signed) 1s complement 8-bit integer where BLACK == -127 and WHITE == 127. Why 1s complement? Because it gives us a signed zero like IEEE 754 floating point. And, because we are using saturating arithmetic, -127 - x == -127 and 127 + x == 127.
How does this relate to floating point infinities? Replace the integer with floating point, BLACK with NEGATIVE_INFINITY, and WHITE with POSITIVE_INFINITY, and what do you get? NEGATIVE_INFINITY - x == NEGATIVE_INFINITY and POSITIVE_INFINITY + x == POSITIVE_INFINITY.
Since you used POSITIVE_INFINITY, I'll use it also. First we need a class to represent our saturating integer-based color; let's call it SaturatedColor and assume it works like any other integer in Java. Now, let's take your code and replace double with our own SaturatedColor and Double.POSITIVE_INFINITY with SaturatedColor.WHITE:
SaturatedColor a = SaturatedColor.WHITE;
SaturatedColor b = SaturatedColor.WHITE;
As we established above, SaturatedColor.WHITE (just WHITE above) is 127, so let's do that here:
SaturatedColor a = 127;
SaturatedColor b = 127;
Now we take the System.out.println statements you used and replace a and b with their values:
System.out.println(127 == 127);
System.out.println(127 < 127);
System.out.println(127 > 127);
It should be obvious what this will print.
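For concreteness, here is a hypothetical sketch of the saturating arithmetic described above; SaturatedColor is not a real Java API, so this just clamps plain ints to the same bounds:

```java
public class SaturatingDemo {
    static final int BLACK = -127, WHITE = 127;

    // Saturating add: results are clamped to [BLACK, WHITE].
    static int satAdd(int a, int b) {
        return Math.max(BLACK, Math.min(WHITE, a + b));
    }

    public static void main(String[] args) {
        System.out.println(satAdd(WHITE, 50) == WHITE);   // true: WHITE + x == WHITE
        System.out.println(satAdd(BLACK, -50) == BLACK);  // true: BLACK - x == BLACK
    }
}
```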
Since Double.NaN.equals(Double.NaN) was mentioned: it's one thing what should happen when you perform arithmetic and compare numbers; it's a totally different thing when you consider how objects should behave.
Two typical problem cases are: Sorting an array of numbers, and using hash values to implement dictionaries, sets, and so on. There are two exceptional cases where the normal ordering with <, = and > doesn't apply: One case is that +0 = -0 and the other is that NaN ≠ NaN, and x < NaN, x > NaN, x = NaN will always be false whatever x is.
Sorting algorithms can get into trouble with this. A sorting algorithm may assume that x = x is always true. So if I know that x is stored in an array and look for it, I might not do any bounds check because the search for it must find something. Not if x is NaN. A sorting algorithm may assume that exactly one of a < b and a >= b must be true. Not if one is NaN. So a naive sorting algorithm may crash when NaNs are present. You'd have to decide where you want NaNs to end up when sorting the array, and then change your comparison code so that it works.
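In Java this is already solved for you: Arrays.sort(double[]) uses the total order of Double.compare, which places NaNs after every other value and -0.0 before 0.0, so a naive sort never crashes (class name is illustrative):

```java
import java.util.Arrays;

public class SortWithNaN {
    public static void main(String[] args) {
        double[] xs = {3.0, Double.NaN, 1.0, Double.POSITIVE_INFINITY};
        Arrays.sort(xs); // NaN is treated as greater than all other values
        System.out.println(Arrays.toString(xs)); // [1.0, 3.0, Infinity, NaN]
    }
}
```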
Now dictionaries and sets and generally hashing: What if I use an NaN as the key? A set contains unique objects. If the set contains an NaN and I try to add another one, is it unique because it is not equal to the one that is already there? What about +0 and -0, should they be considered equal or different? There's the rule that any two items considered equal must have the same hash value. So the sensible thing is (probably) that a hash function returns one unique value for all NaNs, and one unique value for +0 and -0. And after the hash lookup when you need to find an element with the same hash value that is actually equal, two NaNs should be considered equal (but different from anything else).
That's probably why Double.NaN.equals() behaves differently from ==.
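The divergence is easy to observe directly (class name is illustrative): the boxed Double deliberately inverts both exceptional cases so that hashed collections stay consistent:

```java
public class BoxedSemantics {
    public static void main(String[] args) {
        // Primitive comparison follows IEEE 754:
        System.out.println(Double.NaN == Double.NaN);                      // false
        System.out.println(0.0 == -0.0);                                   // true
        // Object equality compares bit patterns instead:
        System.out.println(Double.valueOf(Double.NaN).equals(Double.NaN)); // true
        System.out.println(Double.valueOf(0.0).equals(-0.0));              // false
    }
}
```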
This is because NaN is not a number and is therefore not equal to any number, including NaN.
Source: https://stackoverflow.com/questions/28584669/why-are-floating-point-infinities-unlike-nans-equal