The numeric_limits trait is supposed to be a general way of obtaining various pieces of type information, so that you can do things like
    template <typename T>
    T min(const std::vector<T>& values);   // a generic "smallest element" helper (illustrative signature)
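Assuming a signature along those lines (the std::vector parameter is my guess at its shape), a minimal sketch of such a function could use numeric_limits as a starting sentinel:

    #include <limits>
    #include <vector>

    template <typename T>
    T min(const std::vector<T>& values)
    {
        // Start from the largest representable value so any element can replace it.
        T smallest = std::numeric_limits<T>::max();
        for (const T& v : values)
            if (v < smallest)
                smallest = v;
        return smallest;
    }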
The behaviour of min() isn't all that strange: it returns FLT_MIN, DBL_MIN, or INT_MIN (or their respective values), depending on the type you specialize with. So your question should really be why FLT_MIN and DBL_MIN are defined differently from INT_MIN.
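You can see the difference with a small test program (a sketch of mine; note that lowest(), added in C++11, is the call that returns the most negative finite floating point value):

    #include <iostream>
    #include <limits>

    int main()
    {
        // For an integer type, min() really is the most negative value (INT_MIN).
        std::cout << std::numeric_limits<int>::min() << '\n';

        // For floating point types, min() is the smallest positive normalized value
        // (FLT_MIN / DBL_MIN), not the most negative one.
        std::cout << std::numeric_limits<float>::min() << '\n';
        std::cout << std::numeric_limits<double>::min() << '\n';

        // Since C++11, lowest() gives the most negative finite value instead.
        std::cout << std::numeric_limits<float>::lowest() << '\n';
    }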
Unfortunately, I don't know the answer to that latter question.
My suspicion is that it was defined that way for practical purposes. For integers, you're usually concerned with overflow and underflow, where the minimum and maximum values are the ones of interest.
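For example, a common pattern is to use numeric_limits<int>::max() to guard an addition against overflow before performing it (a minimal sketch, assuming non-negative operands):

    #include <limits>

    // True if a + b would overflow int; assumes a and b are non-negative.
    bool add_would_overflow(int a, int b)
    {
        return b > std::numeric_limits<int>::max() - a;
    }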
For floating point numbers, there is a different kind of underflow: a calculation can produce a value that is greater than zero, yet smaller than the smallest normalized value representable in that floating point type. Knowing that smallest representable value lets you detect and work around the issue. See also the Wikipedia article on subnormal/denormal numbers.
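As a rough sketch, that threshold lets you detect when a result has underflowed into the subnormal range (std::fpclassify from <cmath> gives the same answer more directly):

    #include <cmath>
    #include <iostream>
    #include <limits>

    int main()
    {
        // A value below DBL_MIN: nonzero, but smaller than the smallest normalized double.
        double tiny = std::numeric_limits<double>::min() / 4.0;

        if (tiny != 0.0 && tiny < std::numeric_limits<double>::min())
            std::cout << "gradual underflow: result is subnormal\n";

        if (std::fpclassify(tiny) == FP_SUBNORMAL)
            std::cout << "fpclassify agrees\n";
    }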