Question
I'm reading this article about exponent bias
in floating point numbers and it says the following:
In IEEE 754 floating point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias. Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder. To solve this problem the exponent is biased before being stored, by adjusting its value to put it within an unsigned range suitable for comparison. By arranging the fields so that the sign bit is in the most significant bit position, the biased exponent in the middle, then the mantissa in the least significant bits, the resulting value will be ordered properly, whether it's interpreted as a floating point or integer value. This allows high speed comparisons of floating point numbers using fixed point hardware.
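A minimal C sketch (my own illustration, not from the quoted article) of what that last sentence means for ordinary 32-bit floats: for two non-negative values, comparing the raw bit patterns as unsigned integers gives the same result as comparing the floats themselves. (For negative values the integer comparison is reversed, because IEEE 754 stores the sign as sign-magnitude rather than two's complement.)

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret the bits of a 32-bit float as an unsigned integer. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* memcpy avoids strict-aliasing problems */
    return u;
}

int main(void) {
    float a = 0.21875f, b = 4.5f;

    /* Because the biased exponent sits just below the sign bit and above the
       mantissa, the bit patterns of non-negative floats sort in the same
       order as the values they represent. */
    assert((a < b) == (float_bits(a) < float_bits(b)));

    printf("%g -> 0x%08x\n", a, (unsigned)float_bits(a));
    printf("%g -> 0x%08x\n", b, (unsigned)float_bits(b));
    return 0;
}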
I've also found this explanation from wikipedia's article about offset binary:
This has the consequence that the "zero" value is represented by a 1 in the most significant bit and zero in all other bits, and in general the effect is conveniently the same as using two's complement except that the most significant bit is inverted. It also has the consequence that in a logical comparison operation, one gets the same result as with a two's complement numerical comparison operation, whereas, in two's complement notation a logical comparison will agree with two's complement numerical comparison operation if and only if the numbers being compared have the same sign. Otherwise the sense of the comparison will be inverted, with all negative values being taken as being larger than all positive values.
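Here is a small C sketch (my addition, using 8-bit values rather than anything from the quoted text) of that "logical comparison" point: with two's complement, comparing the raw bit patterns as unsigned numbers gets opposite-sign comparisons backwards, while offset binary (excess-128 here, i.e. the same bits with the most significant bit inverted) keeps the numeric order.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t neg = -3, pos = 5;   /* two signed values of opposite sign */

    /* Two's complement bit patterns viewed as unsigned ("logical") values. */
    uint8_t neg_tc = (uint8_t)neg;          /* 0xFD = 253 */
    uint8_t pos_tc = (uint8_t)pos;          /* 0x05 =   5 */

    /* Offset-binary (excess-128) encodings: add a bias of 128, which is
       the same as inverting the most significant bit. */
    uint8_t neg_ob = (uint8_t)(neg + 128);  /* 0x7D = 125 */
    uint8_t pos_ob = (uint8_t)(pos + 128);  /* 0x85 = 133 */

    printf("numeric:       -3 < 5    -> %d\n", neg < pos);                  /* 1 */
    printf("two's compl.:  253 < 5   -> %d (inverted)\n", neg_tc < pos_tc); /* 0 */
    printf("offset binary: 125 < 133 -> %d (correct)\n", neg_ob < pos_ob);  /* 1 */
    return 0;
}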
I don't really understand what kind of comparison they are talking about here. Can someone please explain using a simple example?
Answer 1:
'Comparison' here refers to the usual comparison of numbers by size: 5 > 4, etc. Suppose floating-point numbers were stored as
[sign bit] [unbiased exponent] [mantissa]
For example, if the exponent is a 3-bit two's complement binary number and the mantissa is a 4-bit unsigned binary number (read as d.ddd, with the binary point after the first bit), you'd have
1 010 1001 = 4.5
1 110 0111 = 0.21875
You can see that the first is bigger than the second, but to figure this out the computer would have to calculate
1.001 x 2^2
and 0.111 x 2^(-2)
and then compare the resulting floating-point numbers. That is already non-trivial with floating-point hardware, and if the machine has no floating-point hardware at all, the comparison has to be done in software, which is slower still.
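A quick C sketch (my addition) of why this layout cannot simply be compared as raw integers: the two's complement exponent puts the bytes in the wrong order.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The two values above, in the [sign][two's complement exponent][mantissa] layout. */
    uint8_t a = 0xA9;  /* 1 010 1001 = 4.5     */
    uint8_t b = 0xE7;  /* 1 110 0111 = 0.21875 */

    /* The negative exponent 110 looks bigger than 010 when the byte is read
       as a plain unsigned integer, so the comparison comes out wrong. */
    printf("a > b as integers: %d\n", a > b);  /* prints 0, although 4.5 > 0.21875 */
    return 0;
}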
So instead the number is stored as
[sign bit] [biased exponent] [mantissa]
Using the same 3-bit exponent (this time biased with a bias of 3, so the stored value is the actual exponent plus 3; see a related question) and the same 4-bit unsigned mantissa, we have
1 101 1001 = 4.5
1 001 0111 = 0.21875
But now comparison is very easy! You can treat the two numbers as integers 11011001
and 10010111
and see at once that the first is bigger. It is just as obvious to the computer, because integer comparisons are easy. This is why biased exponents are used.
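As a sanity check, here is a C sketch (my addition) of the toy biased format above: 1 sign bit, a 3-bit exponent with a bias of 3 (inferred from the worked examples), and a 4-bit mantissa read as d.ddd with no hidden bit. Decoding the two bit patterns reproduces 4.5 and 0.21875, and comparing the raw bytes as unsigned integers agrees with comparing the decoded values.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode the toy 8-bit format: [sign][3-bit exponent, bias 3][4-bit mantissa d.ddd].
   The sign bit is ignored here, since both examples carry the same sign bit. */
static double decode(uint8_t bits) {
    int exponent = ((bits >> 4) & 0x7) - 3;  /* remove the bias of 3 */
    double mantissa = (bits & 0xF) / 8.0;    /* 1001 -> 1.001 in binary */
    return ldexp(mantissa, exponent);        /* mantissa * 2^exponent */
}

int main(void) {
    uint8_t a = 0xD9;  /* 1 101 1001 */
    uint8_t b = 0x97;  /* 1 001 0111 */

    printf("decode(a) = %g, decode(b) = %g\n", decode(a), decode(b));  /* 4.5 and 0.21875 */

    /* Thanks to the biased exponent, plain unsigned integer comparison of the
       raw bytes matches comparison of the values they encode. */
    printf("a > b as integers:     %d\n", a > b);                      /* 1 */
    printf("decode(a) > decode(b): %d\n", decode(a) > decode(b));      /* 1 */
    return 0;
}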
Source: https://stackoverflow.com/questions/37096796/how-does-exponent-bias-make-comparison-easier