Question
When I am down to squeezing the last bit of performance out of a kernel, I usually find that replacing the logical operators (&& and ||) with bitwise operators (& and |) makes the kernel a little bit faster. This was observed by looking at the kernel time summary in the CUDA Visual Profiler.
So, why are bitwise operators faster than logical operators in CUDA? I must admit that they are not always faster, but a lot of the time they are. I wonder what magic gives this speedup.
Disclaimer: I am aware that logical operators short-circuit and bitwise operators do not. I am well aware of how these operators can be misused resulting in wrong code. I use this replacement with care only when the resulting logic remains the same, there is a speedup and the speedup thus obtained matters to me :-)
Answer 1:
Logical operators will often result in branches, particularly when the rules of short circuit evaluation need to be observed. For normal CPUs this can mean branch misprediction and for CUDA it can mean warp divergence. Bitwise operations do not require short circuit evaluation so the code flow is linear (i.e. branchless).
Answer 2:
A && B:

    if (!A) {
        return 0;
    }
    if (!B) {
        return 0;
    }
    return 1;

A & B:

    return A & B;
These are the semantics when evaluating A and B can have side effects (e.g. they are function calls that alter the state of the system when evaluated).
There are many ways that the compiler can optimize the A && B case, depending on the types of A and B and the context.
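The substitution the question describes is safe precisely when both operands are side-effect-free and already normalized to 0/1, as C comparison results are; then the two forms compute the same value bit for bit. A minimal host-side illustration (the range-check helpers are hypothetical names for this sketch):

```c
/* Relational operators in C yield exactly 0 or 1, so for
   side-effect-free comparisons the bitwise form matches the
   logical form while avoiding the short-circuit branch. */
int in_range_logical(int x, int lo, int hi) {
    return (x >= lo) && (x <= hi);
}

int in_range_bitwise(int x, int lo, int hi) {
    return (x >= lo) & (x <= hi);
}
```

Both comparisons are always evaluated in the bitwise version, which is the trade the questioner is knowingly making: a little extra work in exchange for straight-line code.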
Answer 3:
Bitwise operations can be carried out directly in registers at the hardware level, typically as a single ALU instruction. Register operations are the fastest, especially when the data already fits in registers. Logical operators, by contrast, imply short-circuit expression evaluation, which may compile to branches rather than staying register-bound. Typically &, |, ^, >> and the like are among the fastest operations and are used widely in high-performance logic.
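A classic example of the kind of register-level bit trick this answer alludes to (a sketch, not from the original thread): x & (x - 1) clears the lowest set bit, giving a branch-free power-of-two test that also combines two 0/1 comparison results with a bitwise &.

```c
/* x & (x - 1) clears the lowest set bit of x, so a nonzero x is a
   power of two exactly when that leaves 0. Both checks are
   branch-free and are combined with a bitwise & of 0/1 results. */
int is_pow2(unsigned x) {
    return (x != 0) & ((x & (x - 1)) == 0);
}
```

Every operation here (compare, subtract, AND) maps to one fast register instruction, which is why such idioms show up in performance-sensitive code.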
Source: https://stackoverflow.com/questions/9906774/cuda-why-are-bitwise-operators-sometimes-faster-than-logical-operators