I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even treat an integer as a boolean, 0 was false and non-zero was true.
If nothing else, bash and other shells still use an exit status of 0 for true/success and non-zero for false/failure.
I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) as a shorthand for a zero test: on such implementations they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
System calls accessed through the C library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the expression evaluates to less than, equal to, or greater than zero.
eg: IF (I-15) 10,20,10
would test for the condition of I == 15 jumping to line 20 if true (evaluates to zero) and line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes range from 0 to 255. When tested with the 'errorlevel' syntax, a test matches the specified value or anything above it, so the following sends 2 and above to the first goto, 1 to the second, and 0 (success) to the last!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
I remember PL/I had no boolean type. You could declare a bit and assign it the result of a boolean expression; then, to use it, you had to remember that 1 was false and 0 was true.
Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil (and, in Ruby, false) is truthy. More often 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false but to write an explicit test. Java requires you to do this, since its conditions must be boolean.
Instead of this:

    int x;
    ...
    x = 0;
    if (x) // unclear whether a boolean or numeric test is intended
    {
    }

make it explicit:

    if (0 != x)
    {
    }