The C standard is quite unclear about the uint_fast*_t family of types. On a gcc-4.4.4 linux x86_64 system, the types uint_fast16_t and uint_fast32_t are both 8 bytes in size, yet multiplication with these 8-byte types is noticeably slower than with the corresponding 4-byte types. Is the choice of underlying types here simply a mistake?
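For reference, the mapping a given installation chooses can be checked by printing the type sizes; a minimal sketch (the SHOW macro is just shorthand for illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Print the size of the exact, fast and least variants so the mapping
       picked by this platform's <stdint.h> can be inspected. */
    #define SHOW(type) printf("%-16s %2zu bytes\n", #type, sizeof(type))

    int main(void)
    {
        SHOW(uint16_t);  SHOW(uint_fast16_t);  SHOW(uint_least16_t);
        SHOW(uint32_t);  SHOW(uint_fast32_t);  SHOW(uint_least32_t);
        SHOW(uint64_t);  SHOW(uint_fast64_t);  SHOW(uint_least64_t);
        return 0;
    }

On the system described above, uint_fast16_t and uint_fast32_t both report 8 bytes.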
Just because I was curious about the fast integer types, I benchmarked a real-life parser which, in its semantic part, used an integer type for indexing arrays and C++ containers (a sketch of such a parameterization follows the tables below). It performed a mix of operations rather than a simple loop, and most of the program did not depend on the integer type chosen. Actually, for my particular data, any integer type would have been fine, so all versions produced the same output.
At assembly level there are 8 cases: four sizes times two signednesses. The 24 ISO C type names must be mapped onto these eight basic types. As Jens already stated, a "good" mapping must consider the particular processor and the particular code. Therefore, in practice, we should not expect perfect results, even though the compiler writers ought to know the generated code.
Many runs of the example were averaged, so that the fluctuation range of the run time is only about 2 units in the last digit given. For this particular setup the results were obtained with:

Compiler: g++ 4.9.1, options: -O3 -mtune=generic -march=x86-64
CPU: Intel Core 2 Duo E8400 @ 3.00 GHz

¹ int16_t / uint16_t and int64_t / uint64_t, respectively.
² int8_t / uint8_t and int32_t / uint32_t, respectively.
The mapping
| Sign | Size [bits] | Types |
|:----:|------------:|:------|
| u | 8 | uint8_t uint_fast8_t uint_least8_t |
| s | 8 | int8_t int_fast8_t int_least8_t |
| u | 16 | uint16_t uint_least16_t |
| s | 16 | int16_t int_least16_t |
| u | 32 | uint32_t uint_least32_t |
| s | 32 | int32_t int_least32_t |
| u | 64 | uint64_t uint_fast16_t uint_fast32_t uint_fast64_t uint_least64_t |
| s | 64 | int64_t int_fast16_t int_fast32_t int_fast64_t int_least64_t |
Sizes and Timings
| Sign | Size [bits] | text [bytes] | data [bytes] | bss [bytes] | Time [ms] |
|:----:|------------:|-------------:|-------------:|------------:|----------:|
| u | 8 | 1285801 | 3024 | 5704 | 407.61 |
| s | 8 | 1285929 | 3032 | 5704 | 412.39 |
| u | 16 | 1285833 | 3024 | 5704 | 408.81 |
| s | 16 | 1286105 | 3040 | 5704 | 408.80 |
| u | 32 | 1285609 | 3024 | 5704 | 406.78 |
| s | 32 | 1285921 | 3032 | 5704 | 413.30 |
| u | 64 | 1285557 | 3032 | 5704 | 410.12 |
| s | 64 | 1285824 | 3048 | 5704 | 410.13 |
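As mentioned above, only the index type was varied between the versions. A sketch of how such a compile-time parameterization might look (the INDEX_TYPE macro and the sum_bytes function are made up for illustration; the actual parser code is not shown here):

    #include <stdint.h>

    /* Hypothetical sketch: the index type is selected per build,
       e.g. with  g++ -DINDEX_TYPE=uint_fast32_t ...  */
    #ifndef INDEX_TYPE
    #define INDEX_TYPE uint32_t
    #endif

    typedef INDEX_TYPE index_t;

    /* The kind of indexing work whose element type was varied. */
    static unsigned long sum_bytes(const unsigned char *buf, index_t n)
    {
        unsigned long sum = 0;
        for (index_t i = 0; i < n; i++)
            sum += buf[i];
        return sum;
    }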
Yes, I think this is simply a mistake. Unfortunately you can't just go fixing mistakes like this without breaking the ABI, but it may not matter, since virtually nobody (and certainly no library functions I know of) actually uses the *int_fast*_t types.
I think that such a design decision is not simple to make. It depends on many factors, and for the moment I don't take your experiment as conclusive; see below.

First of all, there is no such thing as one single concept of what "fast" should mean. Here you emphasized multiplication in place, which is just one particular point of view.

Then, x86_64 is an architecture and not a processor, so the outcomes might be quite different for different processors in that family. I don't think it would be sane for gcc to have the type decision depend on particular command-line switches that optimize for a given processor.
Now, to come back to your example: I guess you have also looked at the assembler code? Did it, e.g., use SSE instructions to realize your code? Did you switch on processor-specific options, something like -march=native?
Edit: I experimented a bit with your test program, and if I leave it exactly as it is I can basically reproduce your measurements. But modifying it and playing around with it, I am even less convinced that it is conclusive.
E.g., if I change the inner loop to also run downward, the assembler looks almost the same as before (but using a decrement and a test against 0), yet the execution takes about 50% longer. So I guess the timing depends very much on the environment of the instruction you want to benchmark: pipeline stalls, whatever. You'd have to benchmark code of very different nature, where the instructions are issued in different contexts and alignment problems and vectorization come into play, to decide what the appropriate types for the fast typedefs are.
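For illustration, the downward-counting variant of the inner loop looks roughly like this (a sketch only; it assumes an inner loop of the multiply-in-place form used in the benchmark shown further down in this thread):

    #include <stdint.h>

    /* Downward-counting variant: the loop control becomes a decrement plus
       a test against zero, yet on my machine it ran about 50% slower. */
    static uint32_t test_down(void)
    {
        uint32_t p = 1, x;

        for (x = 49999; x != 0; x--)
            p *= x;
        return p;
    }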
Actual performance at runtime is a very complicated topic, with many factors ranging from RAM, hard disks and OSes to the many processor-specific quirks. But this will give you a rough rundown:
N_fastX_t: the fastest available integer type that holds at least X bits (on many platforms this simply ends up being the native word size).

N_leastX_t: the smallest integer type that holds at least X bits, chosen to save memory rather than time.
The Multiplication problem?
Also, to answer why the larger fastX variable would be slower in multiplication: it is due to the very nature of multiplication itself, which works much like what you were taught in school.
http://en.wikipedia.org/wiki/Binary_multiplier
    //Assuming 4bit int
       0011  (3 in decimal)
     x 0101  (5 in decimal)
    =======
       0011  ("0011 x 0001")
      0000-  ("0011 x 0000")
     0011--  ("0011 x 0001")
    0000---  ("0011 x 0000")
    =======
       1111  (15 in decimal)
However, it is important to know that a computer is a "logical idiot". While it is obvious to us humans to skip the rows that multiply by zero, the computer will still work them out (it is cheaper than checking the condition and then working it out anyway). Hence this creates a quirk for a larger-sized variable holding the same value:
    //Assuming 8bit int
         0000 0011  (3 in decimal)
       x 0000 0101  (5 in decimal)
    ==============
         0000 0011  ("0011 x 0001")
       0 0000 000-  ("0011 x 0000")
      00 0000 11--  ("0011 x 0001")
     000 0000 0---  ("0011 x 0000")
    0000 0000 ----  (And the remainders of zeros)
    --------------  (Will all be worked out)
    ==============
         0000 1111  (15 in decimal)
While I did not spell out the remaining 0 x 0 additions in the multiplication process, it is important to note that the computer will "get them done". Hence it is natural that multiplying a larger variable takes more time than its smaller counterpart. (It is therefore always good to avoid multiplications and divisions whenever possible.)
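In code, the schoolbook process above corresponds to a shift-and-add loop whose iteration count equals the width of the type, zero bits included; a rough sketch (real hardware multipliers work differently, but the width-dependent amount of work is the point):

    #include <stdint.h>

    /* Shift-and-add multiplication: one partial product per bit of the
       multiplier. The zero rows are still "worked out" (shifted past),
       so a wider type means more iterations. */
    static uint32_t mul_shift_add(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        int bit;

        for (bit = 0; bit < 32; bit++) {   /* 32 partial products for 32 bits */
            if (b & 1)
                result += a;               /* non-zero row: add it in */
            a <<= 1;                       /* shift for the next row */
            b >>= 1;
        }
        return result;
    }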
However, here comes the second quirk, which may not apply to all processors. All CPU operations are counted in CPU cycles, and in each cycle dozens (or more) of such small addition operations are performed, as seen above. As a result, due to various optimizations and CPU-specific quirks, an 8-bit addition may take the same amount of time as an 8-bit multiplication, and so on.
If it concerns you that much, refer to the Intel manuals: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html
An additional note on CPU vs. RAM
CPUs have advanced, following Moore's law, to become several times faster than your DDR3 RAM. This can result in situations where more time is spent looking up the variable in RAM than it takes the CPU to "compute" it. This is most prominent in long pointer chains.

So while a CPU cache exists on most processors to reduce "RAM look-up" time, its use is limited to specific cases (where the cache line benefits the most). For the cases where the data does not fit, note that the RAM look-up time > CPU processing time (excluding multiplications, divisions and some quirks).
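To illustrate the "long pointer chain" case, here is a minimal sketch (the struct name and chain length are arbitrary): every step of the traversal is a load that depends on the previous one, so once the chain no longer fits in the caches, RAM look-up time dominates over the additions themselves. A real measurement would also randomize the node order so the hardware prefetcher cannot hide the latency.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *next;   /* each step depends on the previous load */
        long         value;
    };

    int main(void)
    {
        enum { N = 1000000 };                    /* arbitrary chain length */
        struct node *nodes = malloc(N * sizeof *nodes);
        long sum = 0;
        long i;

        if (!nodes)
            return 1;

        for (i = 0; i < N; i++) {                /* build the chain */
            nodes[i].next  = (i + 1 < N) ? &nodes[i + 1] : NULL;
            nodes[i].value = i;
        }

        for (struct node *p = nodes; p; p = p->next)   /* chase it */
            sum += p->value;

        printf("%ld\n", sum);
        free(nodes);
        return 0;
    }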
AFAIK compilers only define their own versions of the (u)int_(fast/least)XX_t types if these are not already defined by the system. That is because it is very important that these types are defined identically across all libraries/binaries on a single system. Otherwise, if different compilers defined those types differently, a library built with CompilerA may have a different uint_fast32_t type than a binary built with CompilerB, yet this binary may still link against the library; there is no formal standard requirement that all executable code of a system has to be built by the same compiler (actually on some systems, e.g. Windows, it is rather common that code has been compiled by all kinds of different compilers). If this binary then calls a function of the library, things will break!
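A hypothetical illustration (the header and function names are made up): both sides compile the very same header, yet if CompilerA maps uint_fast32_t to a 64-bit type and CompilerB maps it to a 32-bit type, the library and the binary disagree about the element size of values and about the width of the return value, without any warning.

    /* mylib.h -- hypothetical header shared by a library and an application */
    #include <stddef.h>
    #include <stdint.h>

    /* If uint_fast32_t is 64 bits on one side and 32 bits on the other,
       the two sides read this prototype differently. */
    uint_fast32_t mylib_sum(const uint_fast32_t *values, size_t count);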
So the question is: Is it really GCC defining uint_fast16_t here, or is it actually Linux (I mean the kernel here), or maybe even the standard C library (glibc in most cases), that defines those types? If Linux or glibc defines these, GCC built on that system has no choice but to adopt whatever conventions they have established. The same is true for all other variable-width types, too: char, short, int, long, long long; all these types have only a minimum guaranteed bit width in the C standard (and for int it is actually 16 bits, so on platforms where int is 32 bits, it is already much bigger than the standard requires).
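Those minimums can be written down as compile-time checks; a small sketch using C11 _Static_assert and <limits.h> (the assertions merely restate what the standard already guarantees, so they hold on every conforming implementation):

    #include <limits.h>

    /* Minimum widths guaranteed by the C standard. */
    _Static_assert(CHAR_BIT >= 8,                          "char: at least 8 bits");
    _Static_assert(USHRT_MAX >= 65535U,                    "short: at least 16 bits");
    _Static_assert(UINT_MAX  >= 65535U,                    "int: at least 16 bits");
    _Static_assert(ULONG_MAX >= 4294967295UL,              "long: at least 32 bits");
    _Static_assert(ULLONG_MAX >= 18446744073709551615ULL,  "long long: at least 64 bits");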
Other than that, I actually wonder what is wrong with your CPU/compiler/system. On my system, 64-bit multiplication is just as fast as 32-bit multiplication. I modified your code to test 16, 32, and 64 bit:
    #include <time.h>
    #include <stdio.h>
    #include <inttypes.h>

    #define RUNS 100000

    /* Generate one test function per type: repeatedly multiply p in place
       by x = 1 .. 49999, RUNS times over. */
    #define TEST(type) \
    static type test ## type () \
    { \
        int count; \
        type p, x; \
        \
        p = 1; \
        for (count = RUNS; count != 0; count--) { \
            for (x = 1; x != 50000; x++) { \
                p *= x; \
            } \
        } \
        return p; \
    }

    TEST(uint16_t)
    TEST(uint32_t)
    TEST(uint64_t)

    #define CLOCK_TO_SEC(clockTicks) ((double)(clockTicks) / CLOCKS_PER_SEC)

    /* Time one of the generated test functions and print the CPU time used. */
    #define RUN_TEST(type) \
    { \
        clock_t clockTime; \
        unsigned long long result; \
        \
        clockTime = clock(); \
        result = test ## type (); \
        clockTime = clock() - clockTime; \
        printf("Test %s took %2.4f s. (%llu)\n", \
            #type, CLOCK_TO_SEC(clockTime), result \
        ); \
    }

    int main ()
    {
        RUN_TEST(uint16_t)
        RUN_TEST(uint32_t)
        RUN_TEST(uint64_t)
        return 0;
    }
Using unoptimized code (-O0), I get:
    Test uint16_t took 13.6286 s. (0)
    Test uint32_t took 12.5881 s. (0)
    Test uint64_t took 12.6006 s. (0)
Using optimized code (-O3), I get:
    Test uint16_t took 13.6385 s. (0)
    Test uint32_t took 4.5455 s. (0)
    Test uint64_t took 4.5382 s. (0)
The second output is quite interesting. @R.. wrote in a comment above:
> On x86_64, 32-bit arithmetic should never be slower than 64-bit arithmetic, period.
The second output shows that the same thing cannot be said for 32 vs. 16-bit arithmetic. 16-bit arithmetic can be significantly slower on a 32/64-bit CPU, even though my x86 CPU can natively perform 16-bit arithmetic (unlike some other CPUs, such as a PPC, which can only perform 32-bit arithmetic). However, this only seems to apply to multiplication on my CPU; when changing the code to do addition/subtraction/division, there is no longer a significant difference between 16 and 32 bit.
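For reference, the addition/subtraction/division comparison came from swapping the operator in the inner loop; a sketch of how the TEST macro above could be generalized (TEST_OP and its extra parameters are additions for illustration, not part of the original code):

    #include <inttypes.h>

    #ifndef RUNS
    #define RUNS 100000                    /* same iteration count as above */
    #endif

    /* Same skeleton as TEST above, but with the operator as a parameter. */
    #define TEST_OP(name, type, op) \
    static type test_ ## name (void) \
    { \
        int count; \
        type p, x; \
        \
        p = 1; \
        for (count = RUNS; count != 0; count--) { \
            for (x = 1; x != 50000; x++) { \
                p = p op x; \
            } \
        } \
        return p; \
    }

    TEST_OP(add16, uint16_t, +)            /* 16-bit addition ... */
    TEST_OP(add32, uint32_t, +)            /* ... vs. 32-bit addition */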
The results above are from an Intel Core i7 (2.66 GHz), yet if anyone is interested, I can run this benchmark also on an Intel Core 2 Duo (one CPU generation older) and on a Motorola PowerPC G4.