What's the purpose of looping “xorl

野的像风 2021-01-19 06:39

I have the following x86 assembly code:

  movl   8(%ebp), %edx  # get an argument from the caller
  movl   $0, %eax
  testl  %edx, %edx
  je     .L1                  


        
1 Answer
  • 2021-01-19 07:21

    It looks like the purpose of the whole loop is to XOR all the bits together in the 32-bit arg. i.e. calculate the parity.

    Working backwards from the last instruction (and $1,%eax), we know that only the low bit of the result matters.

    With that in mind, the xor %edx,%eax becomes clearer: xor the current low bit of %edx into %eax. The high garbage doesn't matter.

    The shr loops until all of x's bits have been shifted out. We could always loop 32 times to get all the bits, but that would be less efficient than stopping once x is 0. (Because of how XOR works, we don't need to actually XOR in the remaining 0 bits; XORing with 0 has no effect.)
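
    For example (a worked trace added here for illustration), if the argument is 0b1011 (three set bits, so odd parity), the loop accumulates:

        %eax = 0000 ^ 1011 = 1011,   %edx: 1011 -> 0101
        %eax = 1011 ^ 0101 = 1110,   %edx: 0101 -> 0010
        %eax = 1110 ^ 0010 = 1100,   %edx: 0010 -> 0001
        %eax = 1100 ^ 0001 = 1101,   %edx: 0001 -> 0000  (loop exits)
        %eax & 1 = 1                 # odd parity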


    Once we know what the function does, filling in the C becomes an exercise in clever / compact C syntax. I thought at first that y ^= (x>>=1); would fit inside the loop, but that shifts x before using it the first time.

    The only way I see to do it in one C statement is with the , operator (which does introduce a sequence point, so it's safe to read x on the left side and modify it on the right side of a ,). So, y ^= x, x>>=1; fits.
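
    For example, the loop written with the comma operator (just the one-statement form of the code below):

        while (x != 0)
            y ^= x, x >>= 1;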

    Or, for more readable code, just cheat and put two statements on the same line with a ;.

    int f1(unsigned x) {
        int y = 0;
        while (x != 0) {
            y ^= x;  x >>= 1;
        }
        return y & 1;
    }
    

    This compiles to essentially the same asm as shown in the question, using gcc5.3 -O3 on the Godbolt compiler explorer. The question's code de-optimizes the xor-zeroing idiom into a mov $0, %eax, and drops gcc's silly duplication of ret instructions. (Or maybe it came from an earlier version of gcc that didn't do that.)


    The loop is very inefficient; here is a more efficient way:

    We don't need a loop with O(n) complexity (where n is the width in bits of x). Instead, we can get O(log2(n)) complexity, and actually take advantage of x86 tricks to only do the first 2 steps of that.

    I've left off the operand-size suffix for instructions where it's determined by the registers. (Except for xorw to make the 16-bit xor explicit.)

    #untested
    parity:
        # no frame-pointer boilerplate
    
        xor       %eax,%eax        # zero eax (so the upper 24 bits of the int return value are zeroed).  And yes, this is more efficient than mov $0, %eax
                                   # so when we set %al later, the whole of %eax will be good.

        movzwl    4(%esp), %edx    # load the low 16 bits of `x`.  (Zero-extending into the full %edx is just for efficiency; movw 4(%esp), %dx would work too.)
        xorw      6(%esp), %dx     # xor the high 16 bits of `x`
        # Two loads instead of a load + copy + shift is probably a win, because cache is fast.
        xor       %dh, %dl         # xor the two 8-bit halves, setting PF according to the result
        setnp     %al              # get the inverse of the CPU's parity flag.  Remember that the rest of %eax is already zero, so the result is already zero-extended to 32 bits (int return value)
        ret
    

    Yes, that's right, x86 has a parity flag (PF) that's updated from the low 8 bits of the result of every instruction that "sets flags according to the result", like xor.

    We use the np condition because PF = 1 means even parity: xor of all bits = 0. We need the inverse to return 0 for even parity.

    To take advantage of it, we do a SIMD-style horizontal reduction by bringing the high half down to the low half and combining, repeating twice to reduce 32 bits to 8 bits.
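
    Written as portable C, the same fold looks like this (a sketch added for illustration, not part of the original answer; since C can't read PF directly, it keeps folding all the way down to 1 bit instead of stopping at 8 bits and using setnp):

    int parity_fold(unsigned x) {
        x ^= x >> 16;   // 32 -> 16: fold the high half into the low half
        x ^= x >> 8;    // 16 -> 8   (the asm stops here and reads PF)
        x ^= x >> 4;    //  8 -> 4
        x ^= x >> 2;
        x ^= x >> 1;
        return x & 1;   // the low bit is now the XOR of all 32 original bits
    }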

    Zeroing eax (with an xor) before the instruction that sets flags is slightly more efficient than doing set-flags / setnp %al / movzbl %al, %eax, as I explained in What is the best way to set a register to zero in x86 assembly: xor, mov or and?.


    Or, as @EOF points out, if the CPUID POPCNT feature bit is set, you can use popcnt and test the low bit to see if the number of set bits is even or odd. (Another way to look at this: xor is add-without-carry, so the low bit is the same whether you xor all the bits together or add all the bits together horizontally).

    GNU C also has __builtin_parity and __builtin_popcount, which use the hardware instruction if you tell the compiler that the compile target supports it (with -march=... or -mpopcnt), but otherwise compile to an efficient sequence for the target machine. The Intel intrinsics always compile to the machine instruction, not a fallback sequence, and it's a compile-time error to use them without the appropriate -mpopcnt target option.

    Unfortunately gcc doesn't recognize the pure-C loop as a parity calculation, so it doesn't optimize it into anything like this. Some compilers (like clang, and probably gcc too) can recognize some kinds of popcount idioms and optimize them into the popcnt instruction, but that kind of pattern recognition doesn't happen in this case (a sketch of such an idiom appears after the examples below). :(

    See these on godbolt.

    int parity_gnuc(unsigned x) {
        return  __builtin_parity(x);
    }
        # with -mpopcnt, compiles the same as below
        # without popcnt, compiles to the same upper/lower half XOR algorithm I used, and a setnp
        # using one load and mov/shift for the 32->16 step, and still %dh, %dl for the 16->8 step.
    
    #ifdef __POPCNT__
    #include <immintrin.h>
    int parity_popcnt(unsigned x) {
        return  _mm_popcnt_u32(x) & 1;
    }
    #endif
    
        # gcc does compile this to the optimal code:
        popcnt    4(%esp), %eax
        and       $1, %eax
        ret
    

    See also other links in the x86 tag wiki.
