Question
The ISO C standard allows three encoding methods for signed integers: two's complement, one's complement and sign/magnitude.
What's an efficient or good way to detect the encoding at runtime (or some other time if there's a better solution)? I want to know this so I can optimise a bignum library for the different possibilities.
I plan on calculating this and storing it in a variable each time the program runs so it doesn't have to be blindingly fast - I'm assuming the encoding won't change during the program run :-)
Answer 1:
You just have to check the low-order bits of the constant -1 with something like -1 & 3. This evaluates to

- 1 for sign and magnitude,
- 2 for one's complement, and
- 3 for two's complement.

This should even be possible to do in a preprocessor expression inside #if/#else constructs.
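
A minimal runtime sketch of this check; the case values follow the list above:

    #include <stdio.h>

    int main(void) {
        /* The low two bits of the constant -1 identify the encoding:
           1 = sign/magnitude, 2 = one's complement, 3 = two's complement. */
        switch (-1 & 3) {
        case 1: puts("sign/magnitude");   break;
        case 2: puts("one's complement"); break;
        case 3: puts("two's complement"); break;
        }
        return 0;
    }

The same expression can be moved into the preprocessor, e.g. #if (-1 & 3) == 3 to select two's-complement code paths at compile time.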
Answer 2:
Detecting one's complement should be pretty simple -- something like if (-x == ~x). Detecting two's complement should be just about as easy: if (-x == ~x + 1). If it's neither of those, then it must be sign/magnitude.
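
A short sketch of these comparisons; x must be nonzero for the three cases to differ:

    #include <stdio.h>

    int main(void) {
        int x = 1;  /* any small positive value will do */

        if (-x == ~x)
            puts("one's complement");
        else if (-x == ~x + 1)
            puts("two's complement");
        else
            puts("sign/magnitude");
        return 0;
    }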
Answer 3:
Why not do it at compile time? You could have the build scripts/makefile compile a test program if need be, then use the preprocessor for conditional compilation, as sketched below. This also makes performance much less important, because the check runs once per compile rather than once per run.
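
One hypothetical shape for such a test program: the makefile compiles and runs it once, then appends its output to CFLAGS (the file name and macro names here are illustrative):

    /* detect_encoding.c -- probe compiled and run once at build time */
    #include <stdio.h>

    int main(void) {
        switch (-1 & 3) {               /* same trick as in Answer 1 */
        case 1:  puts("-DENCODING_SIGN_MAGNITUDE");  break;
        case 2:  puts("-DENCODING_ONES_COMPLEMENT"); break;
        default: puts("-DENCODING_TWOS_COMPLEMENT"); break;
        }
        return 0;
    }

The bignum code can then choose implementations with ordinary #ifdef blocks, so nothing is decided at run time.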
Answer 4:
Get a pointer to an int that would show a distinctive bit pattern. Cast it to a pointer to unsigned int and then examine the bit values.
Doing this with a couple of carefully chosen values should do what you want.
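
A sketch of that idea using the single value -1, assuming no padding bits and the sign bit in the highest position:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int value = -1;
        /* Reinterpret the signed object's bits through an unsigned
           pointer (signed/unsigned versions of a type may alias). */
        unsigned int bits = *(unsigned int *)&value;

        if (bits == UINT_MAX)           /* 111...111 */
            puts("two's complement");
        else if (bits == UINT_MAX - 1u) /* 111...110 */
            puts("one's complement");
        else                            /* 100...001 */
            puts("sign/magnitude");
        return 0;
    }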
Answer 5:
I guess you'd store a negative number as an int into a char array large enough to hold it, then compare the array with the various representations to find out.
But uhm... unsigned integers shouldn't have a sign, should they?
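
A rough sketch of that byte-comparison idea; it locates the low-order byte first so the test does not assume a particular endianness (it does assume CHAR_BIT == 8 and no padding bits):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        int one = 1, minus_one = -1;
        unsigned char a[sizeof(int)], b[sizeof(int)];
        size_t low;

        memcpy(a, &one, sizeof(int));
        memcpy(b, &minus_one, sizeof(int));

        /* Whichever byte of the value 1 holds a 1 is the low-order byte. */
        for (low = 0; low < sizeof(int); low++)
            if (a[low] == 1)
                break;

        if (b[low] == 0xFF)             /* low byte of -1 is all ones  */
            puts("two's complement");
        else if (b[low] == 0xFE)        /* all ones except bit 0       */
            puts("one's complement");
        else                            /* 0x01: sign bit is elsewhere */
            puts("sign/magnitude");
        return 0;
    }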
Source: https://stackoverflow.com/questions/3819250/how-to-detect-encodings-on-signed-integers-in-c