Is there a C macro, or some other way, to check at compile time whether my C program is being compiled as 64-bit or 32-bit?
Compiler: GCC. Operating systems that I n…
glibc itself uses this (in inttypes.h):
#if __WORDSIZE == 64
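For illustration, here is a minimal sketch of how that macro could be used, assuming a glibc system where <stdint.h> pulls in <bits/wordsize.h>; __WORDSIZE is a glibc-internal macro and may not exist at all on other C libraries:
#include <stdint.h>

/* __WORDSIZE comes from glibc's <bits/wordsize.h>; it is not portable */
#if defined(__WORDSIZE) && __WORDSIZE == 64
/* 64-bit glibc target */
#elif defined(__WORDSIZE) && __WORDSIZE == 32
/* 32-bit glibc target */
#else
/* not glibc, or __WORDSIZE not exposed; fall back to a portable test */
#endif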
Here is the correct and portable test which does not assume x86 or anything else:
#include <stdint.h>
#if UINTPTR_MAX == 0xffffffff
/* 32-bit */
#elif UINTPTR_MAX == 0xffffffffffffffff
/* 64-bit */
#else
/* wtf */
#endif
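To make the test concrete, here is a self-contained sketch wrapping it in a full program; on GCC you can compile it with and without -m32 and watch the branch change:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
#if UINTPTR_MAX == 0xffffffff
    puts("compiled with 32-bit pointers");
#elif UINTPTR_MAX == 0xffffffffffffffff
    puts("compiled with 64-bit pointers");
#else
    puts("unusual pointer width");
#endif
    return 0;
}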
Use a compiler-specific macro.
I don't know what architecture you are targeting, but since you don't specify it, I will assume run-of-the-mill Intel machines, so most likely you are interested in testing for Intel x86 and AMD64.
For example:
#if defined(__i386__)
// IA-32
#elif defined(__x86_64__)
// AMD64
#else
# error Unsupported architecture
#endif
However, I prefer putting these in a separate header and defining my own compiler-neutral macros.
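For example, here is a sketch of such a header; the MYARCH_* names are hypothetical, and the MSVC macros are included only to show the compiler-neutral idea:
/* myarch.h -- hypothetical compiler-neutral architecture macros */
#ifndef MYARCH_H
#define MYARCH_H

#if defined(__i386__) || defined(_M_IX86)     /* GCC/Clang, MSVC */
# define MYARCH_X86_32 1
#elif defined(__x86_64__) || defined(_M_X64)  /* GCC/Clang, MSVC */
# define MYARCH_X86_64 1
#else
# error Unsupported architecture
#endif

#endif /* MYARCH_H */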
The same program source can (and should) be compilable on 64-bit computers, 32-bit computers, 36-bit computers, and so on.
So, just by looking at the source, if it is any good, you cannot tell how it will be compiled. If the source is not so good, it may be possible to guess which environment the programmer assumed it would be compiled for.
My answer to you is:
There is a way to check the number of bits a source file needs only for bad programs.
You should strive to make your programs work correctly no matter how many bits they are compiled for.
An easy one that will make language lawyers squirm:
#include <limits.h>

if (sizeof (void *) * CHAR_BIT == 64) {
    ...
}
else {
    ...
}
As it is a constant expression, an optimizing compiler will drop the test and keep only the matching branch in the executable.
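As a self-contained sketch (assuming pointer width is the property you care about), with the <limits.h> include it needs:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* constant expression: the optimizer keeps only one branch */
    if (sizeof (void *) * CHAR_BIT == 64) {
        puts("64-bit pointers");
    }
    else {
        puts("not 64-bit pointers");
    }
    return 0;
}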
The question is ambiguous because it doesn't specify whether the requirement is for 64-bit pointers or 64-bit native integer arithmetic, or both.
Some other answers have indicated how to detect 64-bit pointers. Even though the question literally stipulates "compiled as", note this does not guarantee a 64-bit address space is available.
For many systems, detecting 64-bit pointers is equivalent to detecting that 64-bit arithmetic is not emulated, but that is not guaranteed in every scenario. For example, Emscripten emulates memory using JavaScript arrays, which have a maximum size of 2^32 - 1, yet to stay compatible with C/C++ code targeting 64-bit, I believe it is agnostic about the limits it reports (although I haven't tested this). And regardless of the limits stated by the compiler, Emscripten always uses 32-bit arithmetic. So it appears that Emscripten would take LLVM bitcode that targeted 64-bit int and 64-bit pointers and emulate them to the best of JavaScript's ability.
I had originally proposed detecting 64-bit "native" integers as follows, but as Patrick Schlüter pointed out, this only detects the rare case of ILP64:
#include <limits.h>  /* UINT_MAX lives here, not in <stdint.h> */
#if UINT_MAX >= 0xffffffffffffffff
// 64-bit "native" integers
#endif
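If you do want to distinguish the common data models from the limits, bearing in mind the caveats above, a sketch combining the pointer and integer limits might look like this:
#include <limits.h>
#include <stdint.h>

#if UINTPTR_MAX == 0xffffffffffffffff
# if UINT_MAX == 0xffffffffffffffff
/* ILP64: int, long and pointers all 64-bit (rare) */
# elif ULONG_MAX == 0xffffffffffffffff
/* LP64: 64-bit long and pointers, 32-bit int (most 64-bit Unix-like ABIs) */
# else
/* LLP64: 64-bit pointers, 32-bit int and long (64-bit Windows) */
# endif
#elif UINTPTR_MAX == 0xffffffff
/* ILP32 or a similar 32-bit data model */
#endif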
So the correct answer is that, generally, you shouldn't make any assumptions about the address space or the arithmetic efficiency of the nebulous "64-bit" classification based on the limit values the compiler reports. Your compiler may support non-portable predefined macros for a specific data model or microprocessor architecture, but given that the question targets GCC, and given the Emscripten scenario (wherein Clang emulates GCC's predefined macros), even these might be misleading (although I haven't tested it).
Generally speaking, none of these checks gives a reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, so they are basically useless (with respect to those attributes) except in the context of a build system that is not agnostic about its targets. For those attributes, it is preferable to set build macros so the build system can select which variant is compiled, as sketched below.
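For example, here is a sketch of that approach, where MYAPP_NATIVE_64 is a hypothetical macro that a build system would define (e.g. with -DMYAPP_NATIVE_64=1) after actually probing the target:
/* set by the build system, not guessed from compiler limits */
#if defined(MYAPP_NATIVE_64) && MYAPP_NATIVE_64
/* variant that relies on a 64-bit address space and native 64-bit arithmetic */
#else
/* conservative variant */
#endif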