Fastest way to expand bits in a field to all (overlapping + adjacent) set bits in a mask?


Say I have 2 binary inputs named IN and MASK. Actual field size could be 32 to 256 bits depending on what instruction set is used to accomplish the task. Both inputs change ever

2 Answers
  • 2021-02-19 21:15

    I guess @fuz's comment was on the right track. The following example shows how the SSE and AVX2 code below works. The algorithm starts with IN_reduced = IN & MASK, because we are not interested in IN bits at positions where MASK is 0.

    IN                                  = . . . 0 0 0 0 . . . . p q r s . . .
    MASK                                = . . 0 1 1 1 1 0 . . 0 1 1 1 1 0 . . 
    IN_reduced = IN & MASK              = . . 0 0 0 0 0 0 . . 0 p q r s 0 . .
    

    If any of the p q r s bits is 1, then IN_reduced + MASK has a carry bit 1 at position X, immediately to the left of the requested contiguous bits.

    MASK                                = . . 0 1 1 1 1 0 . . 0 1 1 1 1 0 . . 
    IN_reduced                          = . . 0 0 0 0 0 0 . . 0 p q r s 0 . .
    IN_reduced + MASK                   = . . 0 1 1 1 1 . . . 1 . . . . . .
                                                              X
    (IN_reduced + MASK) >>1             = . . . 0 1 1 1 1 . . . 1 . . . . . .
    

    With >> 1 this carry bit 1 is shifted to the same column as bit p (the first bit of the contiguous bits). Now, (IN_reduced + MASK) >>1 is actually an average of IN_reduced and MASK. In order to avoid possible overflow of addition we use the following average: avg(a, b) = (a & b) + ((a ^ b) >> 1) (See @Harold's comment, see also here and here.) With average = avg(IN_reduced, MASK) we get

    MASK                                = . . 0 1 1 1 1 0 . . 0 1 1 1 1 0 . . 
    IN_reduced                          = . . 0 0 0 0 0 0 . . 0 p q r s 0 . .
    average                             = . . . 0 1 1 1 1 . . . 1 . . . . . .
    MASK >> 1                           = . . . 0 1 1 1 1 0 . . 0 1 1 1 1 0 .  
    leading_bits = (~(MASK>>1))&average = . . . 0 0 0 0 0 . . . 1 0 0 0 0 . .  
    

    We can isolate the leading carry bits with leading_bits = (~(MASK>>1) ) & average because MASK>>1 is zero at the positions of the carry bits that we are interested in.
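
    As a quick scalar sanity check of the overflow-free average identity used above (a minimal sketch; the __uint128_t comparison is a GCC/Clang extension used here only for the verification):

    #include <stdint.h>
    #include <assert.h>

    /* avg(a, b) = (a & b) + ((a ^ b) >> 1) equals the true (a + b) >> 1,
       but the sum a + b is never formed, so it cannot overflow.           */
    static uint64_t avg_u64(uint64_t a, uint64_t b){
        return (a & b) + ((a ^ b) >> 1);
    }

    int main(void){
        uint64_t a = 0xFFFFFFFFFFFFFFFFull;   /* (a + b) >> 1 would wrap around here */
        uint64_t b = 0xFFFFFFFFFFFFFFFEull;
        assert(avg_u64(a, b) == (uint64_t)(((__uint128_t)a + b) >> 1));
        return 0;
    }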

    With normal addition the carry propagates from right to left. Here we use a reverse addition, with the carry running from left to right. Reverse adding MASK and leading_bits, rev_added = bit_swap(bit_swap(MASK) + bit_swap(leading_bits)), zeros the bits at the wanted positions. With OUT = (~rev_added) & MASK we get the result.

    MASK                                = . . 0 1 1 1 1 0 . . 0 1 1 1 1 0 . . 
    leading_bits                        = . . . 0 0 0 0 0 . . . 1 0 0 0 0 . .  
    rev_added (MASK,leading_bits)       = . . . 1 1 1 1 0 . . . 0 0 0 0 1 . .
    OUT = ~rev_added & MASK             = . . 0 0 0 0 0 0 . . . 1 1 1 1 0 . .
    

    The algorithm was not thoroughly tested, but the output looks ok.
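
    As a minimal scalar sketch of the same steps (illustration only: one uint64_t at a time, with a naive loop standing in for the bit reversal; the SIMD code further below is the real implementation):

    #include <stdint.h>
    #include <stdio.h>

    /* Naive 64-bit bit reversal, only to keep this sketch short. */
    static uint64_t bitreverse64(uint64_t x){
        uint64_t r = 0;
        for (int i = 0; i < 64; i++){
            r = (r << 1) | (x & 1);
            x >>= 1;
        }
        return r;
    }

    /* Scalar version of the algorithm described above. */
    static uint64_t select_bits_u64(uint64_t IN, uint64_t MASK){
        uint64_t IN_reduced   = IN & MASK;
        /* avg(IN_reduced, MASK) without overflow                                */
        uint64_t average      = (IN_reduced & MASK) + ((IN_reduced ^ MASK) >> 1);
        /* carry bits of IN_reduced + MASK land where MASK >> 1 is zero          */
        uint64_t leading_bits = ~(MASK >> 1) & average;
        /* add with the carry running from left to right                         */
        uint64_t rev_added    = bitreverse64(bitreverse64(MASK) +
                                             bitreverse64(leading_bits));
        return ~rev_added & MASK;
    }

    int main(void){
        /* same values as the low 64-bit lane of Example 1 below */
        uint64_t IN   = 0b1100010010010100;
        uint64_t MASK = 0b0001111010111011;
        /* expected: 0x1eb8 = 0b0001111010111000 */
        printf("OUT = 0x%llx\n", (unsigned long long)select_bits_u64(IN, MASK));
        return 0;
    }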


    The code block below contains two separate listings: the upper half is the SSE code and the lower half is the AVX2 code. (They are combined in one block to avoid bloating the answer too much.) The SSE version works with 2 x 64-bit elements and the AVX2 version works with 4 x 64-bit elements.

    With gcc 9.1, the algorithm compiles to about 29 instructions, aside from 4 vmovdqa instructions for loading constants, which are likely hoisted out of the loop in a real-world application (after inlining). These 29 instructions are a good mix of 9 shuffles (vpshufb) that execute on port 5 (p5) on Intel Skylake, and many other instructions that can often execute on p0, p1 or p5.

    Therefore, a performance of about 3 instructions per cycle might be possible. In that case the throughput would be about 1 function call (inlined) per 10 cycles. In the AVX2 case this means 4 uint64_t OUT results per about 10 cycles.

    Note that the performance is independent of the data(!), which I think is a great benefit of this approach. The solution is branchless and loopless, and cannot suffer from branch mispredictions.


    /*  gcc -O3 -m64 -Wall -march=skylake select_bits.c    */
    #include <immintrin.h>
    #include <stdio.h>
    #include <stdint.h>
    
    int print_sse_128_bin(__m128i x);
    __m128i bit_128_k(unsigned int k);
    __m128i mm_bitreverse_epi64(__m128i x);
    __m128i mm_revadd_epi64(__m128i x, __m128i y);
    
    
    /* Select specific pieces of contiguous bits from `MASK` based on selector `IN`  */
    __m128i mm_select_bits_epi64(__m128i IN, __m128i MASK){
        __m128i IN_reduced   = _mm_and_si128(IN, MASK);
        /* Compute the average of IN_reduced and MASK with avg(a,b)=(a&b)+((a^b)>>1)  */
        /* (IN_reduced & MASK) + ((IN_reduced ^ MASK) >>1) =                          */
        /* ((IN & MASK) & MASK) + ((IN_reduced ^ MASK) >>1) =                         */
        /* IN_reduced + ((IN_reduced ^ MASK) >>1)                                     */
        __m128i tmp          = _mm_xor_si128(IN_reduced, MASK);
        __m128i tmp_div2     = _mm_srli_epi64(tmp, 1);
        __m128i average      = _mm_add_epi64(IN_reduced, tmp_div2);   /* overflow-free average of IN_reduced and MASK */
        __m128i MASK_div2    = _mm_srli_epi64(MASK, 1);
        __m128i leading_bits = _mm_andnot_si128(MASK_div2, average);
        __m128i rev_added    = mm_revadd_epi64(MASK, leading_bits);
        __m128i OUT          = _mm_andnot_si128(rev_added, MASK);
        /* Uncomment the next lines to check the arithmetic */ /*   
        printf("IN           ");print_sse_128_bin(IN           );       
        printf("MASK         ");print_sse_128_bin(MASK         ); 
        printf("IN_reduced   ");print_sse_128_bin(IN_reduced   );       
        printf("tmp          ");print_sse_128_bin(tmp          );       
        printf("tmp_div2     ");print_sse_128_bin(tmp_div2     );       
        printf("average      ");print_sse_128_bin(average      );       
        printf("MASK_div2    ");print_sse_128_bin(MASK_div2    );       
        printf("leading_bits ");print_sse_128_bin(leading_bits );       
        printf("rev_added    ");print_sse_128_bin(rev_added    );       
        printf("OUT          ");print_sse_128_bin(OUT          );       
        printf("\n");*/
        return OUT;       
    }
    
    
    int main(){
        __m128i IN   = _mm_set_epi64x(0b11111110011010110, 0b1100010010010100);
        __m128i MASK = _mm_set_epi64x(0b01011011001111110, 0b0001111010111011);
        __m128i OUT;    
    
        printf("Example 1 \n");
        OUT = mm_select_bits_epi64(IN, MASK);
        printf("IN           ");print_sse_128_bin(IN);
        printf("MASK         ");print_sse_128_bin(MASK);
        printf("OUT          ");print_sse_128_bin(OUT);
        printf("\n\n");
    
                          /*  0b7654321076543210765432107654321076543210765432107654321076543210  */
        IN   = _mm_set_epi64x(0b1000001001001010000010000000100000010000000000100000000111100011, 
                              0b11111110011010111);
        MASK = _mm_set_epi64x(0b1110011110101110111111000000000111011111101101111100011111000001, 
                              0b01011011001111111);
    
        printf("Example 2 \n");
        OUT = mm_select_bits_epi64(IN, MASK);
        printf("IN           ");print_sse_128_bin(IN);
        printf("MASK         ");print_sse_128_bin(MASK);
        printf("OUT          ");print_sse_128_bin(OUT);
        printf("\n\n");
    
        return 0;
    }
    
    
    int print_sse_128_bin(__m128i x){
        for (int i = 127; i >= 0; i--){
            printf("%1u", _mm_testnzc_si128(bit_128_k(i), x));
            if (((i & 7) == 0) && (i > 0)) printf(" ");
        }
        printf("\n");
        return 0;
    }
    
    
    /* From my answer here https://stackoverflow.com/a/39595704/2439725, adapted to 128-bit */
    inline __m128i bit_128_k(unsigned int k){
      __m128i  indices     = _mm_set_epi32(96, 64, 32, 0);
      __m128i  one         = _mm_set1_epi32(1);
    
      __m128i  kvec        = _mm_set1_epi32(k);  
      __m128i  shiftcounts = _mm_sub_epi32(kvec, indices);
      __m128i  kbit        = _mm_sllv_epi32(one, shiftcounts);   
      return kbit;                             
    }
    
    
    /* Copied from Harold's answer https://stackoverflow.com/a/46318399/2439725         */
    /* Adapted to epi64 and __m128i: bit reverse two 64 bit elements                    */
    inline __m128i mm_bitreverse_epi64(__m128i x){
        __m128i shufbytes = _mm_setr_epi8(7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8); 
        __m128i luthigh = _mm_setr_epi8(0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15);
        __m128i lutlow = _mm_slli_epi16(luthigh, 4);
        __m128i lowmask = _mm_set1_epi8(15);
        __m128i rbytes = _mm_shuffle_epi8(x, shufbytes);
        __m128i high = _mm_shuffle_epi8(lutlow, _mm_and_si128(rbytes, lowmask));
        __m128i low = _mm_shuffle_epi8(luthigh, _mm_and_si128(_mm_srli_epi16(rbytes, 4), lowmask));
        return _mm_or_si128(low, high);
    }
    
    
    /* Add in the reverse direction: With a carry from left to */
    /* right, instead of right to left                         */
    inline __m128i mm_revadd_epi64(__m128i x, __m128i y){
        x = mm_bitreverse_epi64(x);
        y = mm_bitreverse_epi64(y);
        __m128i sum = _mm_add_epi64(x, y);
        return mm_bitreverse_epi64(sum);
    }
    /* End of SSE code */
    
    
    /************* AVX2 code starts here ********************************************/
    
    /*  gcc -O3 -m64 -Wall -march=skylake select_bits256.c    */
    #include <immintrin.h>
    #include <stdio.h>
    #include <stdint.h>
    
    int print_avx_256_bin(__m256i x);
    __m256i bit_256_k(unsigned int k);
    __m256i mm256_bitreverse_epi64(__m256i x);
    __m256i mm256_revadd_epi64(__m256i x, __m256i y);
    
    
    /* Select specific pieces of contiguous bits from `MASK` based on selector `IN`  */
    __m256i mm256_select_bits_epi64(__m256i IN, __m256i MASK){
        __m256i IN_reduced   = _mm256_and_si256(IN, MASK);
        /* Compute the average of IN_reduced and MASK with avg(a,b)=(a&b)+((a^b)>>1)  */
        /* (IN_reduced & MASK) + ((IN_reduced ^ MASK) >>1) =                          */
        /* ((IN & MASK) & MASK) + ((IN_reduced ^ MASK) >>1) =                         */
        /* IN_reduced + ((IN_reduced ^ MASK) >>1)                                     */
        __m256i tmp          = _mm256_xor_si256(IN_reduced, MASK);
        __m256i tmp_div2     = _mm256_srli_epi64(tmp, 1);
        __m256i average      = _mm256_add_epi64(IN_reduced, tmp_div2);   /* overflow-free average of IN_reduced and MASK */
        __m256i MASK_div2    = _mm256_srli_epi64(MASK, 1);
        __m256i leading_bits = _mm256_andnot_si256(MASK_div2, average);
        __m256i rev_added    = mm256_revadd_epi64(MASK, leading_bits);
        __m256i OUT          = _mm256_andnot_si256(rev_added, MASK);
        /* Uncomment the next lines to check the arithmetic */ /*   
        printf("IN           ");print_avx_256_bin(IN           );       
        printf("MASK         ");print_avx_256_bin(MASK         ); 
        printf("IN_reduced   ");print_avx_256_bin(IN_reduced   );       
        printf("tmp          ");print_avx_256_bin(tmp          );       
        printf("tmp_div2     ");print_avx_256_bin(tmp_div2     );       
        printf("average      ");print_avx_256_bin(average      );       
        printf("MASK_div2    ");print_avx_256_bin(MASK_div2    );       
        printf("leading_bits ");print_avx_256_bin(leading_bits );       
        printf("rev_added    ");print_avx_256_bin(rev_added    );       
        printf("OUT          ");print_avx_256_bin(OUT          );       
        printf("\n");*/
        return OUT;       
    }
    
    
    int main(){
        __m256i IN   = _mm256_set_epi64x(0b11111110011010110, 
                                         0b1100010010010100,
                                         0b1000001001001010000010000000100000010000000000100000000111100011, 
                                         0b11111110011010111
        );
        __m256i MASK = _mm256_set_epi64x(0b01011011001111110, 
                                         0b0001111010111011,
                                         0b1110011110101110111111000000000111011111101101111100011111000001, 
                                         0b01011011001111111);
        __m256i OUT;    
    
        printf("Example \n");
        OUT = mm256_select_bits_epi64(IN, MASK);
        printf("IN           ");print_avx_256_bin(IN);
        printf("MASK         ");print_avx_256_bin(MASK);
        printf("OUT          ");print_avx_256_bin(OUT);
        printf("\n");
    
        return 0;
    }
    
    
    int print_avx_256_bin(__m256i x){
        for (int i = 255; i >= 0; i--){
            printf("%1u", _mm256_testnzc_si256(bit_256_k(i), x));
            if (((i & 7) == 0) && (i > 0)) printf(" ");
        }
        printf("\n");
        return 0;
    }
    
    
    /* From my answer here https://stackoverflow.com/a/39595704/2439725 */
    inline __m256i bit_256_k(unsigned int k){
      __m256i  indices     = _mm256_set_epi32(224,192,160,128,96,64,32,0);
      __m256i  one         = _mm256_set1_epi32(1);
    
      __m256i  kvec        = _mm256_set1_epi32(k);  
      __m256i  shiftcounts = _mm256_sub_epi32(kvec, indices);
      __m256i  kbit        = _mm256_sllv_epi32(one, shiftcounts);   
      return kbit;                             
    }
    
    
    /* Copied from Harold's answer https://stackoverflow.com/a/46318399/2439725         */
    /* Adapted to epi64: bit reverse four 64 bit elements                    */
    inline __m256i mm256_bitreverse_epi64(__m256i x){
        __m256i shufbytes = _mm256_setr_epi8(7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8);
        __m256i luthigh = _mm256_setr_epi8(0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15, 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15);
        __m256i lutlow = _mm256_slli_epi16(luthigh, 4);
        __m256i lowmask = _mm256_set1_epi8(15);
        __m256i rbytes = _mm256_shuffle_epi8(x, shufbytes);
        __m256i high = _mm256_shuffle_epi8(lutlow, _mm256_and_si256(rbytes, lowmask));
        __m256i low = _mm256_shuffle_epi8(luthigh, _mm256_and_si256(_mm256_srli_epi16(rbytes, 4), lowmask));
        return _mm256_or_si256(low, high);
    }
    
    
    /* Add in the reverse direction: With a carry from left to */
    /* right, instead of right to left                         */
    inline __m256i mm256_revadd_epi64(__m256i x, __m256i y){
        x = mm256_bitreverse_epi64(x);
        y = mm256_bitreverse_epi64(y);
        __m256i sum = _mm256_add_epi64(x, y);
        return mm256_bitreverse_epi64(sum);
    }
    


    Output of the SSE code, with the debugging section uncommented:

    Example 1 
    IN           00000000 00000000 00000000 00000000 00000000 00000001 11111100 11010110 00000000 00000000 00000000 00000000 00000000 00000000 11000100 10010100
    MASK         00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111110 00000000 00000000 00000000 00000000 00000000 00000000 00011110 10111011
    IN_reduced   00000000 00000000 00000000 00000000 00000000 00000000 10110100 01010110 00000000 00000000 00000000 00000000 00000000 00000000 00000100 10010000
    tmp          00000000 00000000 00000000 00000000 00000000 00000000 00000010 00101000 00000000 00000000 00000000 00000000 00000000 00000000 00011010 00101011
    tmp_div2     00000000 00000000 00000000 00000000 00000000 00000000 00000001 00010100 00000000 00000000 00000000 00000000 00000000 00000000 00001101 00010101
    average      00000000 00000000 00000000 00000000 00000000 00000000 10110101 01101010 00000000 00000000 00000000 00000000 00000000 00000000 00010001 10100101
    MASK_div2    00000000 00000000 00000000 00000000 00000000 00000000 01011011 00111111 00000000 00000000 00000000 00000000 00000000 00000000 00001111 01011101
    leading_bits 00000000 00000000 00000000 00000000 00000000 00000000 10100100 01000000 00000000 00000000 00000000 00000000 00000000 00000000 00010000 10100000
    rev_added    00000000 00000000 00000000 00000000 00000000 00000000 01001001 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000001 01000111
    OUT          00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111110 00000000 00000000 00000000 00000000 00000000 00000000 00011110 10111000
    
    IN           00000000 00000000 00000000 00000000 00000000 00000001 11111100 11010110 00000000 00000000 00000000 00000000 00000000 00000000 11000100 10010100
    MASK         00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111110 00000000 00000000 00000000 00000000 00000000 00000000 00011110 10111011
    OUT          00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111110 00000000 00000000 00000000 00000000 00000000 00000000 00011110 10111000
    
    
    Example 2 
    IN           10000010 01001010 00001000 00001000 00010000 00000010 00000001 11100011 00000000 00000000 00000000 00000000 00000000 00000001 11111100 11010111
    MASK         11100111 10101110 11111100 00000001 11011111 10110111 11000111 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111111
    IN_reduced   10000010 00001010 00001000 00000000 00010000 00000010 00000001 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110100 01010111
    tmp          01100101 10100100 11110100 00000001 11001111 10110101 11000110 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000010 00101000
    tmp_div2     00110010 11010010 01111010 00000000 11100111 11011010 11100011 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001 00010100
    average      10110100 11011100 10000010 00000000 11110111 11011100 11100100 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110101 01101011
    MASK_div2    01110011 11010111 01111110 00000000 11101111 11011011 11100011 11100000 00000000 00000000 00000000 00000000 00000000 00000000 01011011 00111111
    leading_bits 10000100 00001000 10000000 00000000 00010000 00000100 00000100 00000001 00000000 00000000 00000000 00000000 00000000 00000000 10100100 01000000
    rev_added    00010000 01100001 00000010 00000001 11000000 01110000 00100000 00100000 00000000 00000000 00000000 00000000 00000000 00000000 01001001 00000000
    OUT          11100111 10001110 11111100 00000000 00011111 10000111 11000111 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111111
    
    IN           10000010 01001010 00001000 00001000 00010000 00000010 00000001 11100011 00000000 00000000 00000000 00000000 00000000 00000001 11111100 11010111
    MASK         11100111 10101110 11111100 00000001 11011111 10110111 11000111 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111111
    OUT          11100111 10001110 11111100 00000000 00011111 10000111 11000111 11000001 00000000 00000000 00000000 00000000 00000000 00000000 10110110 01111111
    
  • 2021-02-19 21:15

    The following approach needs only a single loop, with the number of iterations equal to the number of 'groups' found. I don't know whether it will be more efficient than your approach; there are 6 arithmetic/bitwise operations in each iteration.

    In pseudo code (C-like):

    OUT = 0;
    a = MASK;
    while (a)
    {
        e = a & ~(a + (a & (-a)));
        if (e & IN) OUT |= e;
        a ^= e;
    }
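
    A runnable scalar version of this loop (a minimal sketch using uint64_t; the unsigned wrap-around of a + c provides the "ignoring carry" behaviour mentioned below):

    #include <stdint.h>
    #include <stdio.h>

    /* Expand IN to all MASK groups that contain at least one set bit of IN. */
    static uint64_t select_groups_u64(uint64_t IN, uint64_t MASK){
        uint64_t OUT = 0;
        uint64_t a = MASK;
        while (a)
        {
            uint64_t e = a & ~(a + (a & -a));   /* isolate the rightmost group of a */
            if (e & IN) OUT |= e;               /* keep the group if IN touches it  */
            a ^= e;                             /* drop the group and continue      */
        }
        return OUT;
    }

    int main(void){
        /* mask from the step-by-step example below; IN chosen arbitrarily */
        uint64_t IN   = 0b01000001;
        uint64_t MASK = 0b11010111;
        /* expected: 0b11000111, the group at bit 4 is dropped */
        printf("OUT = 0x%llx\n", (unsigned long long)select_groups_u64(IN, MASK));
        return 0;
    }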
    

    Here's how it works, step by step, using 11010111 as an example mask:

    OUT = 0
    
    a = MASK        11010111
    c = a & (-a)    00000001   keeps rightmost one only
    d = a + c       11011000   clears rightmost group (and sets the bit to its immediate left)
    e = a & ~d      00000111   keeps rightmost group only
    
    if (e & IN) OUT |= e;      adds group to OUT
    
    a = a ^ e       11010000   clears rightmost group, so we can proceed with the next group
    c = a & (-a)    00010000
    d = a + c       11100000
    e = a & ~d      00010000
    
    if (e & IN) OUT |= e;
    
    a = a ^ e       11000000
    c = a & (-a)    01000000
    d = a + c       00000000   (ignoring carry when adding)
    e = a & ~d      11000000
    
    if (e & IN) OUT |= e;
    
    a = a ^ e       00000000   done
    

    As pointed out by @PeterCordes, some operations can be optimized using x86 BMI1 instructions (see the sketch after this list):

    • c = a & (-a): blsi
    • e = a & ~d: andn
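
    A minimal sketch of the loop with those steps spelled out (illustration only; _blsi_u64 is the Intel intrinsic name for blsi and requires compiling with -mbmi, and gcc/clang typically compile the a & ~d pattern to a single andn when BMI1 is enabled):

    #include <stdint.h>
    #include <immintrin.h>   /* _blsi_u64; compile with -mbmi or -march=haswell (or later) */

    static uint64_t select_groups_bmi_u64(uint64_t IN, uint64_t MASK){
        uint64_t OUT = 0;
        uint64_t a = MASK;
        while (a)
        {
            uint64_t c = _blsi_u64(a);   /* blsi: a & (-a), lowest set bit   */
            uint64_t d = a + c;          /* clears the rightmost group       */
            uint64_t e = a & ~d;         /* andn: keeps the rightmost group  */
            if (e & IN) OUT |= e;
            a ^= e;
        }
        return OUT;
    }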

    This approach is good for processor architectures that do not support bitwise reversal. On architectures that do have a dedicated instruction to reverse the order of bits in an integer, wim's answer is more efficient.
