Why do we bias the exponent of a floating-point number?
I'm trying to wrap my head around the floating-point representation of binary numbers, but no matter where I looked, I couldn't find a good answer to this question: why is the exponent biased? What's wrong with the good old reliable two's complement method?

I looked at the Wikipedia article on the topic, but all it says is that the usual representation for signed values "would make comparison harder", and that "the IEEE 754 encodings have a convenient property that an order comparison can be performed between two positive non-NaN numbers by simply comparing the corresponding bit strings."
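To make sure I understand the property the quote is describing, here is a small sketch of my own (not from the article), assuming 32-bit IEEE 754 floats: for two positive, non-NaN values, comparing the raw bit patterns as unsigned integers gives the same ordering as comparing the floats themselves, because the biased exponent sits above the mantissa as an ordinary unsigned field.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret a float's bit pattern as a 32-bit unsigned integer. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void) {
    float a = 0.75f, b = 2.5f;  /* arbitrary positive, non-NaN examples */

    /* Both comparisons print 1: the bit-level ordering matches the
     * numeric ordering for positive, non-NaN floats. */
    printf("a < b as floats:    %d\n", a < b);
    printf("a < b as bit words: %d\n", float_bits(a) < float_bits(b));
    return 0;
}
```

I can see that this works, but I still don't see why a biased exponent is needed for it, which is what I'm asking about.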