I know, this question may seem strange. Programmers sometimes think too much. Please read on...
In C I use signed and unsigned integers a lot.
(As an aside, Perl 6 lets you write
subset Nonnegative::Float of Float where { $_ >= 0 };
and then you can use Nonnegative::Float
just like you would any other type.)
There's no hardware support for unsigned floating point operations, so C doesn't offer it. C is mostly designed to be "portable assembly", that is, as close to the metal as you can be without being tied down to a specific platform.
C is like assembly: what you see is exactly what you get. An implicit "I'll check that this float is nonnegative for you" goes against its design philosophy. If you really want it, you can add assert(x >= 0) or similar, but you have to do that explicitly.
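A minimal sketch of that explicit-check approach (the function name and the precondition are illustrative, not from the original answer):

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper: the language won't enforce nonnegativity for you,
// so the precondition must be stated explicitly by the programmer.
double checked_sqrt(double x) {
    assert(x >= 0.0);        // the explicit check the answer describes
    return std::sqrt(x);
}
```

In a release build the assert compiles away, which is consistent with the "you only pay for what you ask for" philosophy described above.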
I think the main reason is that unsigned floats would have really limited uses compared to unsigned ints. I don't buy the argument that it's because the hardware doesn't support it. Older processors had no floating point capabilities at all, it was all emulated in software. If unsigned floats were useful they would have been implemented in software first and the hardware would have followed suit.
IMHO it's because supporting both signed and unsigned floating-point types in either hardware or software would be too troublesome.
For integer types we can utilize the same logic unit for both signed and unsigned integer operations in most situations, thanks to the nice properties of 2's complement: the result bits are identical for add, sub, non-widening mul and most bitwise operations. For the operations that do differentiate between the signed and unsigned versions (for example comparison, division, widening multiplication and right shift), we can still share the majority of the logic.
Systems that use 1's complement also need only a small modification to the logic, because the difference is simply the carry bit wrapped around to the least significant bit (the end-around carry). I'm not sure about sign-magnitude systems, but it seems they use 1's complement internally, so the same thing applies.
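To make the shared-logic point concrete, here is a small sketch (assuming a two's-complement platform with 32-bit int, which C++20 guarantees): addition produces bit-identical results for both interpretations, while comparison must know the signedness.

```cpp
#include <cstdint>

// Reinterpret the same 32-bit pattern both ways (two's complement).
uint32_t as_unsigned(int32_t s) { return static_cast<uint32_t>(s); }

// Addition can share one adder: the result bits are identical
// whether the operands are read as signed or unsigned.
uint32_t add_bits(int32_t a, int32_t b) {
    return static_cast<uint32_t>(a) + static_cast<uint32_t>(b);
}

// Comparison cannot be shared: the same bit patterns order differently.
bool signed_less(int32_t a, int32_t b)     { return a < b; }
bool unsigned_less(uint32_t a, uint32_t b) { return a < b; }
```

Here -1 and 0xFFFFFFFF are the same 32 bits: adding 5 gives 4 under both readings, but -1 sorts below 0 as signed while 0xFFFFFFFF is the largest unsigned value.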
Unfortunately we don't have that luxury for floating-point types. Simply freeing up the sign bit would give us an unsigned version, but then what should we use that bit for? The obvious choices are to extend the exponent (wider range) or the mantissa (more precision). But both choices need a bigger adder to accommodate the wider value range. That increases the complexity of the logic while the adder's top bit sits there unused most of the time. Even more circuitry would be needed for multiplication, division and other complex operations.
On systems that use software floating-point you'd need two versions of each function, which was hard to justify at a time when memory was so expensive, or you'd have to find some "tricky" way to share parts of the signed and unsigned implementations.
However, floating-point hardware existed long before C was invented, so I believe the choice in C was due to the lack of hardware support, for the reasons I mentioned above.
That said, several specialized unsigned floating-point formats do exist, mainly for image-processing purposes, like the Khronos Group's 10- and 11-bit floating-point types.
I suspect it is because the underlying processors targeted by C compilers don't have a good way of dealing with unsigned floating point numbers.
I believe the unsigned int was created because of the need for a larger value margin than the signed int could offer.
A float has a much larger margin, so there was never a 'physical' need for an unsigned float. And as you point out yourself in your question, the additional 1 bit precision is nothing to kill for.
Edit: After reading the answer by Brian R. Bondy, I have to modify my answer: He is definitely right that the underlying CPUs did not have unsigned float operations. However, I maintain my belief that this was a design decision based on the reasons I stated above ;-)
C++ doesn't have support for unsigned floats because there are no equivalent machine-code operations for the CPU to execute, so it would be very inefficient to support it.
If C++ did support it, you would sometimes be using an unsigned float without realizing that your performance had just been killed. If C++ supported it, every floating-point operation would need to be checked to see whether it is signed or not, and for programs that do millions of floating-point operations this is not acceptable.
So the question becomes: why don't hardware implementers support it? I think the answer is that no unsigned float standard was defined originally. Since languages like to be backwards compatible, even if it were added later, languages couldn't make use of it. To see the floating-point spec, look at the IEEE 754 floating-point standard.
You can work around the lack of an unsigned floating-point type by creating an unsigned-float class that encapsulates a float or double and raises an error if you try to pass in a negative number. This is less efficient, but if you aren't using these values intensively you probably won't care about the slight performance loss.
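A minimal sketch of such a wrapper (the class name and the choice of assert over exceptions are mine, not the answer's):

```cpp
#include <cassert>

// Illustrative wrapper: stores a double and rejects negative values at
// construction time. A production version might throw std::invalid_argument
// instead of asserting.
class UnsignedFloat {
public:
    explicit UnsignedFloat(double v) : value_(v) {
        assert(value_ >= 0.0 && "UnsignedFloat cannot hold a negative value");
    }
    double get() const { return value_; }

    // Results route back through the constructor, so e.g. a subtraction
    // that goes negative is caught by the same check.
    UnsignedFloat operator+(UnsignedFloat o) const { return UnsignedFloat(value_ + o.value_); }
    UnsignedFloat operator-(UnsignedFloat o) const { return UnsignedFloat(value_ - o.value_); }

private:
    double value_;
};
```

This illustrates the performance point from the answer: every arithmetic result pays for a runtime check that a built-in type would never perform.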
I definitely see the usefulness of having an unsigned float. But C/C++ tend to choose the efficiency that works best for everyone over safety.