I was wondering why True is equal to -1 and not 1. If I remember correctly, back in the day, "true" in C would be equal to 1.
Dim t As Integer = CInt(True)  ' -1
Dim f As Integer = CInt(False) ' 0
Possible duplicate: Casting a boolean to an integer returns -1 for true?
The Boolean constant True has the numeric value −1. This is because the Boolean data type is stored as a 16-bit signed integer. In this representation, −1 evaluates to 16 binary 1s (the Boolean value True), and 0 to 16 binary 0s (the Boolean value False). This is apparent when performing a Not operation on the 16-bit signed integer value 0, which returns the integer value −1; in other words, True = Not False. This inherent functionality becomes especially useful when performing logical operations on the individual bits of an integer, such as And, Or, Xor, and Not. This definition of True is also consistent with BASIC since the early-1970s Microsoft BASIC implementation, and is related to the characteristics of CPU instructions at the time.
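To see that True = Not False identity directly in VB.NET, here is a minimal sketch (the variable names are illustrative, not from the original post):
' Not 0 flips every bit, producing a value whose bits are all 1s,
' which a signed integer reads back as -1.
Dim falseAsInt As Integer = CInt(False)   ' 0
Dim trueAsInt As Integer = Not falseAsInt ' -1 (all bits set)
Console.WriteLine(trueAsInt = CInt(True)) ' True
' Because True is "all bits set", bitwise And/Or on the integer form
' agrees with logical And/Or on the Boolean form.
Console.WriteLine(CInt(True) And CInt(False)) ' 0  (False)
Console.WriteLine(CInt(True) Or CInt(False))  ' -1 (True)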
When you cast any non-zero number to a Boolean, it evaluates to True. For instance:
Dim value As Boolean = CBool(-1) ' True
Dim value1 As Boolean = CBool(1) ' True
Dim value2 As Boolean = CBool(0) ' False
However, as you point out, any time you cast a Boolean that is set to True to an Integer, it evaluates to -1. For instance:
Dim value As Integer = CInt(CBool(1)) ' -1
The reason is that -1 is the signed-integer value with all of its bits set to 1. Since a Boolean is stored as a 16-bit integer, it is easier to toggle between the true and false states by NOT'ing all of the bits rather than only the least significant bit. In other words, for True to be 1, it would have to be stored like this:
True = 0000000000000001
False = 0000000000000000
But it's easier to just store it like this:
True = 1111111111111111
False = 0000000000000000
The reason it's easier is that, at the low level:
1111111111111111 = NOT(0000000000000000)
Whereas:
0000000000000001 <> NOT(0000000000000000)
0000000000000001 = NOT(1111111111111110)
For instance, you can replicate this behavior using Int16 variables like this:
Dim value As Int16 = 0
Dim value2 As Int16 = Not value
Console.WriteLine(value2) ' -1
This would be more obvious if you were using unsigned integers, because then the value of True is the maximum value rather than -1. For instance:
Dim value As UInt16 = CType(True, UInt16) ' 65535
So, the real question, then, is why in the world VB.NET uses 16 bits to store a single-bit value. The real reason is speed. Yes, it uses 16 times the amount of memory, but a processor can perform 16-bit Boolean operations a lot faster than single-bit Boolean operations.
Side note: The reason the Int16 value of -1 is stored as 1111111111111111 instead of as 1000000000000001, as you might expect (where the first bit would be the "sign bit" and the rest would be the value), is that it is stored as two's complement. Storing negative numbers in two's complement makes arithmetic operations much easier for the processor to perform. It's also safer because, with two's complement, there's no way to represent 0 as a negative number, which could cause all sorts of confusion and bugs.
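If you want to inspect the two's-complement bit patterns yourself, Convert.ToString accepts a base argument (a small sketch, not from the original post):
' Render the raw two's-complement bit pattern of 16-bit values in base 2.
Dim minusOne As Short = -1
Dim minusTwo As Short = -2
Console.WriteLine(Convert.ToString(minusOne, 2)) ' 1111111111111111
Console.WriteLine(Convert.ToString(minusTwo, 2)) ' 1111111111111110
Note how the second line matches the NOT identity shown above: 0000000000000001 = NOT(1111111111111110).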
In most languages, a numeric value of 0 is false, and everything else is considered true. If I remember correctly, -1 is actually all bits set to 1 while 0 is all bits set to 0. I guess this is why.
I guess it goes back to assembly language, where a conditional is translated to a compare (cmp) operation and the zero flag (ZF) is checked. For true expressions the ZF is not set, and for false expressions it is. Early Intel processors work like that, but I cannot remember whether the Zilog Z80 and the Motorola 8-bit processors had the same convention.
In Visual Basic, 0 is False, whereas any non-zero value is True. Also, per MSDN:
When Visual Basic converts numeric data type values to Boolean, 0 becomes False and all other values become True. When Visual Basic converts Boolean values to numeric types, False becomes 0 and True becomes -1.
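A short sketch confirming both directions of that documented behavior:
' Numeric -> Boolean: 0 becomes False, all other values become True.
Console.WriteLine(CBool(0))    ' False
Console.WriteLine(CBool(42))   ' True
Console.WriteLine(CBool(-7))   ' True
' Boolean -> numeric: False becomes 0, True becomes -1.
Console.WriteLine(CInt(False)) ' 0
Console.WriteLine(CInt(True))  ' -1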