I just came onto a project with a pretty huge code base.
I'm mostly dealing with C++ and a lot of the code they write uses double negation for their boolean logic.
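Something like this (not their actual code, just a hypothetical sketch of the pattern; 'variable' and 'api.call' are placeholder names):

    #include <cstdio>

    // Hypothetical stand-ins: a raw pointer and an API call returning a numeric status.
    struct Api {
        int call(const char* arg) { return arg != nullptr; }  // returns int, not bool
    };

    int main() {
        int value = 42;
        int* variable = &value;
        Api api;

        // The pattern in question: double negation before the boolean test,
        // even though 'if' and '&&' already treat the values as conditions.
        if (!!variable && !!api.call("something")) {
            std::puts("both are truthy");
        }
    }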
It's correct but, in C, pointless here -- 'if' and '&&' would treat the expression the same way without the '!!'.
The reason to do this in C++, I suppose, is that '&&' could be overloaded. But then, so could '!', so it doesn't really guarantee you get a bool without looking at the code for the types of 'variable' and 'api.call'. Maybe someone with more C++ experience could explain; perhaps it's meant as a defense-in-depth sort of measure, not a guarantee.
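For instance, a contrived type (invented here purely for illustration) can make '!!' yield something that isn't a bool at all:

    #include <type_traits>

    // Contrived: operator! is overloaded to return the type itself.
    struct Weird {
        Weird operator!() const { return *this; }
    };

    int main() {
        Weird w;
        auto x = !!w;  // x is a Weird, not a bool
        static_assert(std::is_same_v<decltype(x), Weird>);
    }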
Legacy C developers had no Boolean type, so they often wrote '#define TRUE 1' and '#define FALSE 0' and then used arbitrary numeric data types for Boolean comparisons. Now that we have 'bool', many compilers will emit warnings when certain kinds of assignments and comparisons mix numeric types and Boolean types. These two usages will eventually collide when working with legacy code.
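A minimal sketch of that collision, assuming legacy-style macros and a compiler that flags int-to-bool conversions (MSVC's C4800 "performance warning" is one example):

    // Legacy-style definitions, as described above.
    #define TRUE  1
    #define FALSE 0

    int legacy_is_ready(void) { return TRUE; }  // "Boolean" function that returns int

    int main() {
        // Mixing the legacy numeric convention with bool is where compilers
        // start to complain about forcing an int into a bool.
        bool ready = legacy_is_ready();
        return ready ? 0 : 1;
    }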
To work around this problem, some developers use the following Boolean identity: '!num_value' returns bool 'true' if 'num_value == 0' and 'false' otherwise; '!!num_value' returns bool 'false' if 'num_value == 0' and 'true' otherwise. The single negation is sufficient to convert 'num_value' to 'bool'; however, the double negation is necessary to restore the original sense of the Boolean expression.
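In code (a small sketch with a made-up 'num_value'):

    #include <cassert>

    int main() {
        int num_value = 42;           // any non-zero value counts as "true" in the legacy sense

        bool inverted = !num_value;   // single negation: a bool, but with the sense flipped
        bool as_bool  = !!num_value;  // double negation: a bool with the original sense restored

        assert(inverted == false);
        assert(as_bool  == true);
        return 0;
    }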
This pattern is known as an idiom, i.e., something commonly used by people familiar with the language. Therefore, I don't see it as an anti-pattern, as much as I would 'static_cast<bool>(num_value)'. The cast might very well give the correct results, but some compilers then emit a performance warning, so you still have to address that.
The other way to address this is to say '(num_value != FALSE)'. I'm okay with that too, but all in all, '!!num_value' is far less verbose, may be clearer, and is not confusing the second time you see it.
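For comparison, the three spellings side by side (again with a made-up 'num_value' and the legacy 'FALSE' macro):

    #define FALSE 0

    int main() {
        int num_value = 7;

        bool a = !!num_value;                   // the idiom discussed here
        bool b = static_cast<bool>(num_value);  // explicit cast; may still draw a warning on some compilers
        bool c = (num_value != FALSE);          // comparison against the legacy macro

        // All three yield the same result for any num_value.
        return (a == b && b == c) ? 0 : 1;
    }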