Very simple question: I read that GCC supports the long long int type. But how can it do math operations with it when the CPU is only 32 bits wide?
Saying an architecture is 32-bit (or 64-bit, or whatever) is usually only an approximation of what the processor is capable of. Usually that number refers to the width of pointers; arithmetic might be quite different. E.g. the x86 architecture has 32-bit pointers and most arithmetic is performed in 32-bit registers, but it also has native support for some basic 64-bit operations.
Also, you shouldn't be under the impression that the standard integer types have some prescribed width. In particular, long long is at least 64 bits but may be wider. Use the typedefs int32_t and int64_t from <stdint.h> if you want to be portably sure about the width.
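A minimal sketch of what that looks like in practice (assuming a C99 compiler and a hosted environment; the variable names are just for illustration):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t big   = INT64_C(1) << 40;  /* exactly 64 bits wide on every platform */
    int32_t small = 12345;             /* exactly 32 bits wide on every platform */
    printf("%" PRId64 "\n", big + small);  /* small is promoted to 64 bits for the addition */
    return 0;
}

The PRId64 macro from <inttypes.h> supplies the right printf format no matter how the platform defines int64_t.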
If you want to know what gcc (or any other compiler) does with long long, you have to look into the specification for your particular target platform.
It's easy enough to just compile and test if you have a 32-bit system accessible. gcc has a flag, -S,
which turns on assembly-language output. Here's what it produces on my 32-bit Intel machine:
// load two long longs from the stack into edx:eax and ebx:ecx (high:low)
movl 32(%esp), %eax
movl 36(%esp), %edx
movl 24(%esp), %ecx
movl 28(%esp), %ebx
// a+b
addl %ecx, %eax
adcl %ebx, %edx
// a-b
subl %ecx, %eax
sbbl %ebx, %edx
// etc
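For reference, here is a sketch of the kind of C source that could produce output along those lines (the function names add64 and sub64 are made up for illustration):

long long add64(long long a, long long b) { return a + b; }
long long sub64(long long a, long long b) { return a - b; }

Compiling with gcc -S file.c (add -m32 on a 64-bit host) writes the generated assembly to file.s so you can inspect it yourself.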
Internally, the type is represented by a high-word and a low-word, like:
struct long_long   /* conceptual representation only; "long" itself is a keyword */
{
    int32_t  highWord;
    uint32_t lowWord;
};
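To make that concrete, here is a hedged sketch of splitting a 64-bit value into the two 32-bit halves and putting it back together; unsigned types are used so the shifts are well defined, and the function names are illustrative:

#include <stdint.h>

/* split a 64-bit value into its 32-bit halves */
static void split64(uint64_t v, uint32_t *high, uint32_t *low)
{
    *low  = (uint32_t)v;          /* keep the low 32 bits */
    *high = (uint32_t)(v >> 32);  /* keep the high 32 bits */
}

/* recombine the halves into the original 64-bit value */
static uint64_t join64(uint32_t high, uint32_t low)
{
    return ((uint64_t)high << 32) | low;
}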
The compiler needs to know whether it is targeting a 32-bit or a 64-bit environment and then selects the right representation of the number: if it is 64-bit, the arithmetic can be done natively; if it is 32-bit, the compiler has to take care of the math between the high and low words.
The helper routines the compiler falls back on for the harder operations (64-bit division and the like) live in its runtime support library (libgcc for GCC, with names such as __divdi3), and you can call them yourself if you really need to. On an additional note, be aware of the difference between little-endian and big-endian byte order (see the Wikipedia article); which one applies depends on the target architecture.
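A small hedged illustration: on a 32-bit x86 target a 64-bit division can't be done in a couple of inline instructions, so GCC usually emits a call into libgcc instead (the exact routine can vary by target and operation):

long long div64(long long a, long long b)
{
    return a / b;  /* on 32-bit x86 this generally becomes a call to __divdi3 */
}

Compiling that with gcc -m32 -S and looking at the output is an easy way to see which library routine your toolchain actually uses.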
Most likely it is handled by the compiler itself, not natively by the CPU, the same way any compiler can (or could) support arbitrarily large number types.
The compiler will synthesize math operations (or use function calls) that take more than one CPU instruction to perform the operation. For example, an add operation will add the low-order components (the low words) of the long long values and will then take the carry out of that operation and feed it into an add operation on the high-order words of the long long.
So the following C code:
long long a;
long long b;
long long c;
// ...
c = a + b;
might be represented by an instruction sequence that looks something like:
mov eax, [a.low] ; add the low order words
add eax, [b.low]
mov edx, [a.high] ; add the high order words,
adc edx, [b.high] ; including the carry
mov [c.low], eax
mov [c.high], edx
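For comparison, here is a hedged sketch of the same idea in portable C: adding two 64-bit values using only 32-bit operations and propagating the carry by hand (the function name and the high/low parameter layout are just for illustration):

#include <stdint.h>

/* c = a + b, where each 64-bit value is passed as a high word and a low word */
static void add64_by_halves(uint32_t ah, uint32_t al,
                            uint32_t bh, uint32_t bl,
                            uint32_t *ch, uint32_t *cl)
{
    *cl = al + bl;               /* low words add modulo 2^32 */
    *ch = ah + bh + (*cl < al);  /* the low sum wrapped exactly when there was a carry */
}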
And if you consider it for a moment, compilers for 8- and 16-bit systems had to do this kind of thing for 16- and/or 32-bit values long before long long came into being.