I occasionally write code something like this:
// file1.cpp
#define DO_THIS 1
#if DO_THIS
// stuff
#endif
During code development I may remove or rename the #define (or simply mistype the macro name), and the #if block is then silently skipped instead of producing an error. Why does the preprocessor treat an undefined macro in an #if as 0 rather than complaining about it?
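As a hypothetical illustration of what I mean, if the #define line goes away, the guarded code quietly disappears from the build:
// file1.cpp
// #define DO_THIS 1   // removed (or misspelled) during refactoring
#if DO_THIS            // DO_THIS is now undefined, so this behaves like #if 0
// stuff -- silently excluded, no warning or error
#endif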
Again, as often happens, the answer to the "why" question is simply: it was decided to work that way a long time ago. When you use an undefined macro in an #if, it is substituted with 0. If you want to know whether the macro is actually defined, use the defined() operator.
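For example (reusing the hypothetical DO_THIS macro from your question), you can distinguish "defined as 0" from "not defined at all":
#if defined(DO_THIS)    // true whenever DO_THIS is defined, even if it is 0
# if DO_THIS            // true only when DO_THIS expands to a non-zero value
/* feature enabled */
# else
/* feature explicitly disabled */
# endif
#else
# error "DO_THIS is not defined -- did you forget the #define?"
#endif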
There are some interesting benefits to that "default to 0" approach though, especially when you are using macros that might be defined by the platform rather than your own macros.
For example, some platforms offer the macros __BYTE_ORDER, __LITTLE_ENDIAN and __BIG_ENDIAN to describe their endianness. You could write a preprocessor directive like
#if __BYTE_ORDER == __LITTLE_ENDIAN
/* whatever */
#else
/* whatever */
#endif
But if you try to compile this code on a platform that does not define these non-standard macros at all (i.e. knows nothing about them), the preprocessor will translate the above code into
#if 0 == 0
...
and the little-endian version of the code will be compiled "by default". If you had written the original #if as
#if __BYTE_ORDER == __BIG_ENDIAN
...
then the big-endian version of the code would have been compiled "by default" instead.
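If you would rather be told about the missing macros than silently get one branch, you can combine the comparison with defined(). A sketch, assuming the glibc-style names used above (on glibc they typically come from <endian.h>):
#if !defined(__BYTE_ORDER) || !defined(__LITTLE_ENDIAN) || !defined(__BIG_ENDIAN)
# error "Byte-order macros are not available on this platform"
#elif __BYTE_ORDER == __LITTLE_ENDIAN
/* little-endian version */
#elif __BYTE_ORDER == __BIG_ENDIAN
/* big-endian version */
#else
# error "Unknown byte order"
#endif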
I can't say that the behavior of #if was defined this way specifically to enable tricks like the above, but it does come in useful at times.