The problem:
I'm trying to figure out how to write code (C preferred, ASM only if there is no other solution) that would make the branch predictor mispredict.
The easiest way to avoid compiler optimizations is to put dummy functions `void f(void) { }` and `void g(void) { }` in another translation unit, and have link-time optimization disabled. This forces `if (*++p) f(); else g();` to be a real, unpredictable branch, assuming that `p` points into an array of random booleans. (This also sidesteps the branch-prediction problem inside `rand()`: just generate the data before the measurement.) If a `for(;;)` loop gives you problems, just throw in a `goto`.
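A minimal sketch of that setup; the file split, the array size N, and the `rand()` fill are my own choices, and I use `*p++` so that `p` can start at the first element:

```c
/* dummies.c -- keep these in a separate translation unit and build
 * WITHOUT link-time optimization, so the calls cannot be inlined
 * or folded away. */
void f(void) { }
void g(void) { }
```

```c
/* main.c */
#include <stdlib.h>

extern void f(void);
extern void g(void);

enum { N = 1 << 20 };

int main(void)
{
    static _Bool data[N];
    for (long i = 0; i < N; i++)
        data[i] = rand() & 1;      /* random booleans, filled before the measurement */

    const _Bool *p = data;
    for (long i = 0; i < N; i++) {
        if (*p++) f(); else g();   /* the genuinely unpredictable branch */
    }
    return 0;
}
```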
Note that the "loop unrolling trick" in the comment is somewhat misleading: you're essentially creating thousands of branches. Each branch would be individually predicted, except that in practice none of them will be, because the CPU simply cannot hold thousands of distinct predictions. This may or may not be a benefit for your real goal.
If you know how the branch predictor works, you can get to 100% misprediction: just take the predictor's expected prediction each time and do the opposite. The problem is that we don't know how it is implemented.
I have read that typical predictors are able to predict patterns such as 0,1,0,1 and so on, but I'm sure there is a limit to how long the pattern can be. My suggestion would be to try every pattern of a given length (such as 4) and see which one comes closest to your target percentage. You should be able to target both 50% and 100% and come very close. This profiling needs to be done once per platform, or at runtime.
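A sketch of that profiling pass, under the assumption that a slower run means more mispredictions; the pattern length, array size, repetition count, and `clock()`-based timing are all my own choices:

```c
#include <stdio.h>
#include <time.h>

enum { PAT_LEN = 4, N = 1 << 20, REPS = 50 };

static _Bool data[N];
static volatile int sink;          /* keeps the sums from being optimized out */

/* Fill the array by endlessly repeating one PAT_LEN-bit pattern. */
static void fill(unsigned pattern)
{
    for (long i = 0; i < N; i++)
        data[i] = (pattern >> (i % PAT_LEN)) & 1;
}

/* Time the branchy loop; more mispredictions show up as more time. */
static clock_t measure(void)
{
    int sum0 = 0, sum1 = 0;
    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++) {
            if (data[i]) sum0++; else sum1++;
        }
    sink = sum0 + sum1;
    return clock() - t0;
}

int main(void)
{
    for (unsigned pat = 0; pat < 1u << PAT_LEN; pat++) {
        fill(pat);
        printf("pattern %2u: %ld ticks\n", pat, (long)measure());
    }
    return 0;   /* pick the pattern whose timing matches your target rate */
}
```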
I doubt that 3% of the total number of branches are in system code, as you said. The kernel does not impose 3% overhead on purely CPU-bound user code. Increase the scheduling priority to the maximum.
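On Linux, for instance, a real-time scheduling class keeps the kernel out of the way as much as possible; this sketch is my own illustration and typically needs root or CAP_SYS_NICE:

```c
#include <sched.h>
#include <stdio.h>

/* Ask for the highest FIFO real-time priority for this process, so the
 * kernel preempts the measurement as rarely as possible. */
static void max_priority(void)
{
    struct sched_param sp = {
        .sched_priority = sched_get_priority_max(SCHED_FIFO)
    };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");
}
```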
You can take the RNG out of the game by generating random data once and iterating over the same data many times. The branch predictor is unlikely to detect this (although it clearly could).
I would implement this by filling a `bool[1 << 20]` with a zero-one pattern like I described. Then you can run the following loop over it many times:
```c
int sum0 = 0, sum1 = 0;
for (size_t i = 0; i < (1 << 20); i += 4) {
    /* array is the bool[1 << 20] filled above; the loop is unrolled
     * by hand so that loop overhead does not dominate */
    if (array[i])     sum0++; else sum1++;
    if (array[i + 1]) sum0++; else sum1++;
    if (array[i + 2]) sum0++; else sum1++;
    if (array[i + 3]) sum0++; else sum1++;
}
/* print both sums so the computation is not optimized out */
printf("%d %d\n", sum0, sum1);
```
You'll need to examine the disassembly to make sure that the compiler did not do anything clever.
I don't see why the complicated setup that you have right now is necessary. The RNG can be taken out of the question, and I don't see why more than this simple loop is needed. If the compiler is playing tricks, you might need to mark the variables as `volatile`, which makes the compiler (or at least most compilers) treat them as if they were external function calls.
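For example (a tiny sketch; whether this is actually needed depends on your compiler):

```c
/* Every access to a volatile object must really be performed, so the
 * compiler cannot collapse the branchy loop into straight-line math. */
volatile int sum0 = 0, sum1 = 0;
```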
Since the RNG now hardly matters because it is almost never called, you can even invoke the cryptographic RNG of your OS to get numbers that are indistinguishable (to any human) from true random numbers.
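A sketch of that fill, assuming a POSIX system where the OS RNG is readable as /dev/urandom (on Linux, getrandom() would also work):

```c
#include <stdio.h>
#include <stdlib.h>

/* Fill out[0..n-1] with one cryptographically random bit per element. */
static void fill_random(_Bool *out, size_t n)
{
    unsigned char *buf = malloc(n);
    FILE *f = fopen("/dev/urandom", "rb");
    if (!buf || !f || fread(buf, 1, n, f) != n) {
        perror("/dev/urandom");
        exit(EXIT_FAILURE);
    }
    fclose(f);
    for (size_t i = 0; i < n; i++)
        out[i] = buf[i] & 1;
    free(buf);
}
```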
Fill an array with bytes, and write a loop that checks each byte and branches depending on the value of the byte.
Now examine the architecture of your processor and its branch prediction very carefully. Fill the initial bytes of the array so that after examining them, the processor is in a predictable, known state. From that known state, you can find out whether the next branch is predicted taken or not. Set the next byte so that the prediction is wrong. Again, find out whether the next branch is predicted taken or not, set the next byte so that the prediction is wrong, and so on.
If you disable interrupts as well (which could change the branch prediction), you can come close to 100% mispredicted branches.
As a simple case, on an old PowerPC processor with strong/weak prediction, after three taken branches it will always be in the state "strong taken" and one branch not taken changes it to "weak taken". If you now have a sequence of alternating not taken / taken branches, the prediction is always wrong and switches between weak not taken and weak taken.
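A sketch of generating such an adversarial sequence, using a two-bit saturating counter as my stand-in for the strong/weak scheme just described:

```c
#include <stddef.h>

/* States 0,1 predict "not taken"; states 2,3 predict "taken". */
static void fill_adversarial(_Bool *taken, size_t n)
{
    int state = 3;                      /* "strong taken", as after three taken branches */
    for (size_t i = 0; i < n; i++) {
        _Bool prediction = state >= 2;
        taken[i] = !prediction;         /* always do the opposite of the prediction */
        if (taken[i]) { if (state < 3) state++; }   /* update as the hardware would */
        else          { if (state > 0) state--; }
    }
    /* Against a single two-bit counter this mispredicts every branch,
     * oscillating between "weak taken" and "weak not taken". */
}
```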
This will of course only work with that particular processor. Most modern processors would see that sequence as almost 100% predictable. For example, they might use two separate predictors: one for the case "last branch was taken" and one for the case "last branch was not taken". But for such a processor, a different sequence of bytes will give the same 100% misprediction rate.