Question:
In this wiki article it shows 23 bits for precision, 8 for the exponent, and 1 for the sign.
Where is the hidden 24th bit in the float type that makes (23 + 1) bits, giving 7 significant decimal digits?
Answer 1:
Floating point numbers are usually normalized. Consider, for example, scientific notation as most of us learned it in school. You always scale the exponent so there's exactly one digit before the decimal point. For example, instead of 123.456, you write 1.23456 × 10^2.
Floating point on a computer is normally handled (almost[1]) the same way: numbers are normalized so there's exactly one digit before the binary point (a binary point, since most computers work in binary instead of decimal). There's one difference though: in binary, that means the digit before the binary point must be a 1. Since it's always a 1, there's no real need to store that bit. To save a bit of storage in each floating point number, that 1 bit is implicit instead of being stored.
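To make the implicit bit concrete, here is a minimal C sketch (my own illustration, not part of the original answer) that pulls apart the 32 bits of a float and rebuilds the value by gluing the hidden leading 1 back onto the 23 stored fraction bits:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void) {
    float x = 6.5f;                 /* 1.101 * 2^2 in binary */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits); /* reinterpret the 32 bits of the float */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 exponent bits        */
    uint32_t fraction = bits & 0x7FFFFF;      /* the 23 stored bits     */

    /* The 24th bit is never stored: for a normal number we prepend the 1 here. */
    double significand = 1.0 + fraction / (double)(1 << 23);
    double rebuilt = (sign ? -1.0 : 1.0) * ldexp(significand, (int)exponent - 127);

    printf("sign=%u exponent=%u fraction=0x%06X\n", sign, exponent, fraction);
    printf("rebuilt value = %g (original %g)\n", rebuilt, x);
    return 0;
}
```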
As usual, there's just a bit more to the situation than that though. The main difference is denormalized numbers. Consider, for example, if you were doing scientific notation but you could only use exponents from -99 to +99. If you wanted to store a number like, say, 1.234 × 10^-102, you wouldn't be able to do that directly, so it would probably just get rounded down to 0.
Denormalized numbers give you a way to deal with that. Using a denormalized number, you'd store that as 0.001234 × 10^-99. Assuming (as is normally the case on a computer) that the number of digits for the mantissa and exponent are each limited, this loses some precision, but still avoids throwing away all the precision and just calling it 0.
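As a small illustration of that (again my own sketch, not from the answer), repeatedly halving the smallest normal float shows the gradual slide into subnormal values instead of an abrupt jump to 0:

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    float x = FLT_MIN;   /* smallest positive normal float, 2^-126 */
    for (int i = 0; i < 5; i++) {
        printf("x = %.10e  (%s)\n", x,
               fpclassify(x) == FP_SUBNORMAL ? "subnormal" :
               fpclassify(x) == FP_NORMAL    ? "normal"    : "other");
        x /= 2.0f;       /* each halving below FLT_MIN drops one bit of precision */
    }
    return 0;
}
```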
[1] Technically, there are differences, but they make no difference to the basic understanding involved.
Answer 2:
http://en.wikipedia.org/wiki/Single_precision_floating-point_format#IEEE_754_single_precision_binary_floating-point_format:_binary32
"The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1 unless the exponent is stored with all zeros."
That explains it pretty well: it is by convention/design that the leading bit is not stored explicitly; the specification states that it is there unless the exponent bits are all zeros.
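One way to confirm that the implicit bit is counted (a quick check I've added, not part of the answer) is to look at the constants the C standard library exposes for float:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("FLT_MANT_DIG = %d significand bits (23 stored + 1 implicit)\n", FLT_MANT_DIG);
    printf("FLT_DIG      = %d decimal digits always preserved\n", FLT_DIG);
    printf("FLT_MIN      = %e (smallest normal float)\n", FLT_MIN);
    return 0;
}
```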
Answer 3:
As you write, the single-precision floating-point format has a sign bit, eight exponent bits, and 23 significand bits. Let s be the sign bit, e be the exponent bits, and f be the significand bits. Here is what various combinations of bits stand for:
If e and f are zero, the object is +0 or -0, according to whether s is 0 or 1.
If e is zero and f is not, the object is (-1)^s * 2^(1-127) * 0.f. "0.f" means to write 0, period, and the 23 bits of f, then interpret that as a binary numeral. E.g., 0.011000... is 3/8. These are the "subnormal" numbers.
If 0 < e < 255, the object is (-1)^s * 2^(e-127) * 1.f. "1.f" is similar to "0.f" above, except you start with 1 instead of 0. This is the implicit bit. Most of the floating-point numbers are in this format; these are the "normal" numbers.
If e is 255 and f is zero, the object is +infinity or -infinity, according to whether s is 0 or 1.
If e is 255 and f is not zero, the object is a NaN (Not a Number). The meaning of the f field of a NaN is implementation dependent; it is not fully specified by the standard. Commonly, if the first bit is zero, it is a signaling NaN; otherwise it is a quiet NaN.
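Putting those cases together, here is a small C sketch (my own addition, following the rules above) that classifies a raw 32-bit pattern from its sign, exponent, and fraction fields; the example patterns in main are chosen to hit each case:

```c
#include <stdio.h>
#include <stdint.h>

static void classify(uint32_t bits) {
    uint32_t s = bits >> 31;            /* sign bit            */
    uint32_t e = (bits >> 23) & 0xFF;   /* 8 exponent bits     */
    uint32_t f = bits & 0x7FFFFF;       /* 23 fraction bits    */

    if (e == 0 && f == 0)  printf("%c0\n", s ? '-' : '+');
    else if (e == 0)       printf("subnormal: (-1)^%u * 2^-126 * 0.f\n", s);
    else if (e < 255)      printf("normal:    (-1)^%u * 2^%d * 1.f\n", s, (int)e - 127);
    else if (f == 0)       printf("%cinfinity\n", s ? '-' : '+');
    else                   printf("NaN (%s, by the common convention)\n",
                                  (f >> 22) ? "quiet" : "signaling");
}

int main(void) {
    classify(0x00000000u);  /* +0                     */
    classify(0x00000001u);  /* smallest subnormal     */
    classify(0x3F800000u);  /* 1.0, a normal number   */
    classify(0xFF800000u);  /* -infinity              */
    classify(0x7FC00000u);  /* a quiet NaN            */
    return 0;
}
```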
Source: https://stackoverflow.com/questions/6302172/float-precision-bits