modular arithmetic on the gpu

名媛妹妹 2021-02-04 08:00

I am working on a GPU algorithm that is supposed to do a lot of modular computations, particularly various operations on matrices over a finite field, which in the long run red…

3 Answers
  • 2021-02-04 08:34

    There are tricks to perform mod operations efficiently, but only if the modulus is a power of 2.

    For instance, x mod y == x & (y - 1) when y is 2^n. Performing a bitwise operation like this is the fastest option.

    Otherwise, probably a look-up table? Below is a link to a discussion of efficient modulo implementations. You might need to implement it yourself to get the most out of it.

    Efficient computation of mod
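
    The bitmask identity above can be sketched as a short host-side helper (plain C++; the function name is illustrative, not from the original post):

```cpp
#include <cassert>

// x mod y via bitmask -- valid ONLY when y is a power of two.
// y - 1 sets every bit below y's single set bit, so the AND keeps
// exactly the low-order remainder bits.
unsigned mod_pow2(unsigned x, unsigned y) {
    return x & (y - 1);
}
```

    On GPUs this avoids the multi-instruction sequence that a general modulo expands into.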

  • 2021-02-04 08:46

    A high-end Fermi GPU (e.g. a GTX 580) will likely give you the best performance among shipping cards for this. You would want all 32-bit operands to be of type "unsigned int" for best performance, as there is some additional overhead for the handling of signed divisions and modulos.

    The compiler generates very efficient code for division and modulo with a fixed divisor. As I recall, it is usually around three to five machine instructions on Fermi and Kepler. You can check the generated SASS (machine code) with cuobjdump --dump-sass. If you only use a few different divisors, you might be able to use templated functions with constant divisors.
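
    The constant-divisor idea can be sketched as follows (plain C++ so it can be tested on the host; on the device you would add __device__ __forceinline__; the name is illustrative):

```cpp
// Making the divisor a compile-time template parameter lets the
// compiler strength-reduce the modulo into a multiply-shift sequence
// instead of emitting a full division subroutine.
template <unsigned Divisor>
unsigned mod_const(unsigned x) {
    return x % Divisor;  // Divisor is a compile-time constant here
}
```

    Each distinct divisor instantiates its own specialized function, which is why this only pays off for a small set of divisors.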

    You should see on the order of sixteen inlined SASS instructions being generated for the unsigned 32-bit operations with variable divisor, across Fermi and Kepler. The code is limited by the throughput of integer multiplies and for Fermi-class GPUs is competitive with hardware solutions. Somewhat reduced performance is seen on currently shipping Kepler-class GPUs due to their reduced integer multiply throughput.

    [Added later, after clarification of the question:]

    Unsigned 64-bit division and modulo with a variable divisor, on the other hand, are implemented as called subroutines of about 65 instructions on Fermi and Kepler. They look close to optimal. On Fermi, this is still reasonably competitive with hardware implementations (note that 64-bit integer divisions are not exactly super fast on CPUs that provide this as a built-in instruction). Below is some code that I posted to the NVIDIA forums some time back for the kind of task described in the clarification. It avoids the expensive division, but does assume that fairly large batches of operands share the same divisor. It uses double-precision arithmetic, which is especially fast on Tesla-class GPUs (as opposed to consumer cards). I only did a cursory test of the code, so you might want to test it more carefully before deploying it.

    // Let b, p, and A[i] be integers < 2^51
    // Let N be an integer on the order of 10000
    // for i from 1 to N
    // A[i] <-- A[i] * b mod p
    
    /*---- kernel arguments ----*/
    unsigned long long *A;
    double b, p; /* convert from unsigned long long to double before passing to kernel */
    double oop;  /* pass precomputed 1.0/p to kernel */
    
    /*---- code inside kernel -----*/
    double a, q, h, l, rem;
    const double int_cvt_magic = 6755399441055744.0; /* 2^52+2^51 */
    
    a = (double)A[i];
    
    /* approximate quotient and round it to the nearest integer */
    q = __fma_rn (a * b, oop, int_cvt_magic);
    q = q - int_cvt_magic;
    
    /* back-multiply, representing p*q as a double-double h:l exactly */
    h = p * q;
    l = __fma_rn (p, q, -h);
    
    /* remainder is double-width product a*b minus double-double h:l */
    rem = __fma_rn (a, b, -h);
    rem = rem - l;
    
    /* remainder may be negative as quotient rounded; fix if necessary */
    if (rem < 0.0) rem += p;
    
    A[i] = (unsigned long long)rem;
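
    The same remainder computation can be sketched host-side with std::fma standing in for the device's __fma_rn (an assumption: both are IEEE-754 fused multiply-adds; the function name is illustrative):

```cpp
#include <cmath>
#include <cstdint>

// Computes (a * b) mod p for integers a, b, p < 2^51, mirroring the
// kernel fragment above: a magic-number round-to-nearest quotient,
// then an exact double-double back-multiply of p * q.
uint64_t mulmod_fp(uint64_t a_i, uint64_t b_i, uint64_t p_i) {
    const double int_cvt_magic = 6755399441055744.0; /* 2^52 + 2^51 */
    double a = (double)a_i, b = (double)b_i, p = (double)p_i;
    double oop = 1.0 / p;  /* reciprocal, precomputed in the kernel version */

    /* approximate quotient, rounded to the nearest integer */
    double q = std::fma(a * b, oop, int_cvt_magic) - int_cvt_magic;

    /* represent p*q exactly as the double-double pair h:l */
    double h = p * q;
    double l = std::fma(p, q, -h);

    /* remainder = a*b - p*q */
    double rem = std::fma(a, b, -h) - l;
    if (rem < 0.0) rem += p;  /* quotient was rounded up; fix */
    return (uint64_t)rem;
}
```

    Because the quotient is rounded to nearest rather than truncated, the single negative-remainder fixup at the end is sufficient.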
    
  • 2021-02-04 08:48

    Some time ago I experimented a lot with modular arithmetic on the GPU. On Fermi GPUs you can use double-precision arithmetic to avoid expensive div and mod operations. For example, modular multiplication can be done as follows:

    // fast truncation of double-precision to integers
    #define CUMP_D2I_TRUNC (double)(3ll << 51)
    // computes r = a + b subop c unsigned using extended precision
    #define VADDx(r, a, b, c, subop) \
        asm volatile("vadd.u32.u32.u32." subop " %0, %1, %2, %3;" :  \
                "=r"(r) : "r"(a) , "r"(b), "r"(c));
    
    // computes a * b mod m; invk = (double)(1<<30) / m
    __device__ __forceinline__ 
    unsigned mul_m(unsigned a, unsigned b, volatile unsigned m,
        volatile double invk) { 
    
       unsigned hi = __umulhi(a*2, b*2); // 3 flops
       // 2 double instructions
       double rf = __uint2double_rn(hi) * invk + CUMP_D2I_TRUNC;
       unsigned r = (unsigned)__double2loint(rf);
       r = a * b - r * m; // 2 flops
    
       // can also be replaced by: VADDx(r, r, m, r, "min") // == umin(r, r + m);
       if((int)r < 0) 
          r += m;
       return r;
    }
    

    However, this only works for 31-bit integer moduli (if losing 1 bit is not critical for you), and you also need to precompute 'invk' beforehand. This gives the absolute minimum number of instructions I could achieve, i.e.:

    SHL.W R2, R4, 0x1;
    SHL.W R8, R6, 0x1;
    IMUL.U32.U32 R4, R4, R6;
    IMUL.U32.U32.HI R8, R2, R8;
    I2F.F64.U32 R8, R8;
    DFMA R2, R2, R8, R10;
    IMAD.U32.U32 R4, -R12, R2, R4;
    ISETP.GE.AND P0, pt, R4, RZ, pt;
    @!P0 IADD R4, R12, R4;
    

    For a description of the algorithm, you can have a look at my paper: gpu_resultants. Other operations, like (xy - zw) mod m, are also explained there.

    Out of curiosity, I compared the performance of the resultant algorithm using your modular multiplication:

    unsigned r = (unsigned)(((u64)a * (u64)b) % m);
    

    against the optimized version with mul_m.

    Modular arithmetic with default % operation:

    low_deg: 11; high_deg: 2481; bits: 10227
    nmods: 330; n_real_pts: 2482; npts: 2495
    
    res time: 5755.357910 ms; mod_inv time: 0.907008 ms; interp time: 856.015015 ms; CRA time: 44.065857 ms
    GPU time elapsed: 6659.405273 ms; 
    

    Modular arithmetic with mul_m:

    low_deg: 11; high_deg: 2481; bits: 10227
    nmods: 330; n_real_pts: 2482; npts: 2495
    
    res time: 1100.124756 ms; mod_inv time: 0.192608 ms; interp time: 220.615143 ms; CRA time: 10.376352 ms
    GPU time elapsed: 1334.742310 ms; 
    

    So on average it is about 5x faster. Note also that you might not see a speed-up if you just evaluate raw arithmetic performance using a kernel with a bunch of mul_mod operations (like a saxpy example). But in real applications with control logic, synchronization barriers, etc., the speed-up is very noticeable.
