Optimising a 1D heat equation using SIMD


Question


I am working on a CFD code (computational fluid dynamics). I recently had the chance to see the Intel compiler using SSE in one of my loops, giving nearly a 2x speedup in that loop. However, getting SSE and SIMD instructions seems to be more a matter of luck: most of the time the compiler does nothing.

I am therefore trying to force the use of SSE, considering that AVX instructions will make this even more important in the near future.

I made a simple 1D heat transfer code. It consists of two phases, each using the results of the other (U0 -> U1, then U1 -> U0, then U0 -> U1, etc.). As it iterates, it converges to the stable solution. Most of the loops in my main code use the same kind of calculation (finite differences).

However, my SSE code is two times slower than the normal loop. The results are the same, so the calculations are consistent.

Did I make a mistake? I am testing the loop on a Core 2, before moving to the supercomputer (Westmere).

Here is the code, with the SSE loop first, then the reference loop:

#include <stdio.h>
#include <emmintrin.h>
#include <time.h>
#define n1 1004
#define niter 200000

int i,j,t;

double U0[n1] __attribute__ ((aligned(16)));
double U1[n1] __attribute__ ((aligned(16)));
double Dx,Dy,Lx,Ly,InvDxDx,Dt,alpha,totaltime,Stab,DtAlpha,DxDx;

__m128d vmmx00;
__m128d vmmx01;
__m128d vmmx02;
__m128d vmmx10;

__m128d va;
__m128d vb;
__m128d vc;
__m128d vd;

clock_t time0,time1;
FILE *f1;
int main()
{
/* ---- GENERAL ---- */
   alpha = 0.4;
   totaltime = 1.0/100.0;
   Dt = totaltime/((niter-1)*1.0);
   Lx = 1.0;
   Dx = Lx/((n1-1)*1.0);
   InvDxDx = 1.0/(Dx*Dx);
   DxDx = Dx*Dx;
   Stab = alpha*Dt*(InvDxDx);
   DtAlpha = Dt*alpha;
/* Stability if result <= 0.5 */
   printf("Stability factor : %f \n",Stab);


   for( i = 0; i < n1; i++){U0[i] = 0.0;}
   U0[1] = 1.0;
   U0[2] = 1.0;
   U0[3] = 1.0;
   U0[n1-2] = 2.0;


    va = _mm_set1_pd(-2.0);
    vb = _mm_set1_pd(InvDxDx);
    vd = _mm_set1_pd(DtAlpha);

 time0=clock();

 for( t = 0; t < niter; t++)
 {

    for( i = 2; i < n1-2; i+=2)
    {
      //printf("%d  %d  \n",i,j);
      //fflush(stdout);
      vmmx00 = _mm_load_pd(&U0[i]);
      vmmx01 = _mm_loadu_pd(&U0[i+1]);
      vmmx02 = _mm_loadu_pd(&U0[i-1]);

      vmmx10 = _mm_mul_pd(va,vmmx00);   // U1[i] = -2.0*U0[i];
      vmmx10 = _mm_add_pd(vmmx10,vmmx01);   // U1[i] = U1[i] + U0[i+1];
      vmmx10 = _mm_add_pd(vmmx10,vmmx02);   // U1[i] = U1[i] + U0[i-1];
      vmmx10 = _mm_mul_pd(vb,vmmx10);   // U1[i] = U1[i] * InvDxDx;

      vmmx10 = _mm_mul_pd(vd,vmmx10);   // U1[i] = U1[i] * DtAlpha;
      vmmx10 = _mm_add_pd(vmmx10,vmmx00);   // U1[i] = U1[i] + U0[i];

      _mm_store_pd(&U1[i],vmmx10);

      // U1[i] = U0[i] + DtAlpha*( (U0[i+1]-2.0*U0[i]+U0[i-1])*InvDxDx )
    }

    for( i = 2; i < n1-2; i+=2)
    {
      //printf("%d  %d  \n",i,j);
      //fflush(stdout);
      vmmx00 = _mm_load_pd(&U1[i]);
      vmmx01 = _mm_loadu_pd(&U1[i+1]);
      vmmx02 = _mm_loadu_pd(&U1[i-1]);

      vmmx10 = _mm_mul_pd(va,vmmx00);   // U0[i] = -2.0*U1[i];
      vmmx10 = _mm_add_pd(vmmx10,vmmx01);   // U0[i] = U0[i] + U1[i+1];
      vmmx10 = _mm_add_pd(vmmx10,vmmx02);   // U0[i] = U0[i] + U1[i-1];
      vmmx10 = _mm_mul_pd(vb,vmmx10);   // U0[i] = U0[i] * InvDxDx;

      vmmx10 = _mm_mul_pd(vd,vmmx10);   // U0[i] = U0[i] * DtAlpha;
      vmmx10 = _mm_add_pd(vmmx10,vmmx00);   // U0[i] = U0[i] + U1[i];

      _mm_store_pd(&U0[i],vmmx10);

      // U0[i] = U1[i] + DtAlpha*( (U1[i+1]-2.0*U1[i]+U1[i-1])*InvDxDx )
    }

 }

 time1=clock();
 printf("Loop 0, total time : %f \n", (double) time1-time0);
 f1 = fopen ("out0.dat", "wt");
 for( i = 1; i < n1-1; i++)
 {
       fprintf (f1, "%d\t%f\n", i, U0[i]);
 }
 fclose (f1);


// REF

   for( i = 0; i < n1; i++){U0[i] = 0.0;}
   U0[1] = 1.0;
   U0[2] = 1.0;
   U0[3] = 1.0;
   U0[n1-2] = 2.0;

 time0=clock();

 for( t = 0; t < niter; t++)
 {

    for( i = 2; i < n1-2; i++)
    {
       U1[i] = U0[i] + DtAlpha* (U0[i+1]-2.0*U0[i]+U0[i-1])*InvDxDx;
    }

    for( i = 2; i < n1-2; i++)
    {
       U0[i] = U1[i] + DtAlpha* (U1[i+1]-2.0*U1[i]+U1[i-1])*InvDxDx;
    }

 }

 time1=clock();
 printf("Loop 0, total time : %f \n", (double) time1-time0);
 f1 = fopen ("outref.dat", "wt");
 for( i = 1; i < n1-1; i++)
 {
       fprintf (f1, "%d\t%f\n", i, U0[i]);
 }
 fclose (f1);



 return 0;
}

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Edit :

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Considering your answers, I have found the right place to discuss this, so I will enlarge the topic and explain my aims. If you agree, we can discuss all of the loops one after another. This could be long, but it could be extremely useful for a lot of people in my domain, and for open-source solvers like OpenFOAM; and that is without considering the impact on energy consumption (we all use large supercomputers).

The CFD code I am using runs for more than a month on 512 Westmere cores. I am using MPI (Message Passing Interface) to communicate between processes. The physical field can be considered a mesh, so a 1D, 2D or 3D array, depending on the type of simulation; 3D is the biggest you can imagine.

The full code is in Fortran 95, which is in effect a simplified C. It is easy to interface with C: C routines can be called directly from Fortran, and the types are the same (int, double, long, etc.). But Fortran does not allow such low-level optimizations: it is designed to be simple. That is why I am investigating C intrinsics.
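
For completeness, a minimal sketch of how such a C routine can be called directly from Fortran 95, assuming the traditional convention (arguments passed by reference, trailing underscore on the symbol name; the routine name and stencil are illustrative):

    /* C side: callable from Fortran 95 as `call deriv1(f, df, n)`.
       Assumes the common gfortran/ifort convention: all arguments are
       passed by reference and the symbol gets a trailing underscore. */
    void deriv1_(const double *f, double *df, const int *n)
    {
        for (int i = 1; i < *n - 1; i++)
            df[i] = f[i-1] - 2.0*f[i] + f[i+1];   /* illustrative stencil */
    }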

In all CFD codes, we face the same problems: three types of loops, and MPI memory distribution. Let's first discuss the loops:

  • Spatial derivatives (finite differences): the loop is a 1D convolution in all cases (1D, 2D or 3D, since you differentiate along only one axis at a time): DF = F[i-1]*A + F[i]*B + F[i+1]*C. However, in more than 1D, the memory access becomes the following (a compilable C sketch follows this list):

    // x1 derivative
    for i 1 -> n1
        for j 1 ->n2
            DF_x1[i][j] = F[i-1][j]*A + F[i][j]*B + F[i+1][j]*C
    
    // x2 derivative
    for i 1 -> n1
        for j 1 ->n2
            DF_x2[i][j] = F[i][j-1]*D + F[i][j]*E + F[i][j+1]*G
    

    In the first loop, the memory access is not contiguous: the three stencil points are n2 elements apart (in Fortran it is the inverse, since the memory layout is inverted). This is the first problem. The same applies to 3D arrays.

  • Poisson equation resolution, i.e. matrix multiplication: the loop is a 1D, 2D or 3D convolution, depending on the simulation. It is in fact a second derivative (DDF = D(DF)).

    for i 1 -> n1
        for j 1 ->n2
            DDF[i][j] = F[i-1][j]*A + F[i][j]*B + F[i+1][j]*C + F[i][j-1]*D + F[i][j]*E + F[i][j+1]*G
    

    This loop is the same as the one I first gave you, but it is computed directly, without the even and odd phases.

  • Weighted Gauss-Seidel resolution, i.e. the same loop as below, but with a dependency:

    // even
    for i 1 -> n1
        for j 1 ->n2
            F1[i][j] = F0[i-1][j]*A + F0[i][j]*B + F0[i+1][j]*C + F0[i][j-1]*D + F0[i][j]*E + F0[i][j+1]*G
    
    //odd
    for i 1 -> n1
        for j 1 ->n2
            F0[i][j] = F1[i-1][j]*A + F1[i][j]*B + F1[i+1][j]*C + F1[i][j-1]*D + F1[i][j]*E + F1[i][j+1]*G
    

    This is the loop you investigated before.
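
To make the access patterns concrete, here is a compilable C sketch of loop types 1 and 2 (array sizes, names and coefficients are illustrative; the real code is in Fortran). In C's row-major layout the three x1 stencil points are n2 elements apart, while the x2 stencil stays within one contiguous row; in Fortran it is the reverse:

    // Sketch: loop types 1 and 2 as compilable C (row-major layout).
    // F, DF_x1, DF_x2, DDF are n1 x n2 arrays; A..G are illustrative coefficients.
    void stencils(int n1, int n2, const double F[n1][n2],
                  double DF_x1[n1][n2], double DF_x2[n1][n2],
                  double DDF[n1][n2],
                  double A, double B, double C, double D, double E, double G)
    {
        // x1 derivative: the three stencil points are n2 doubles apart.
        for (int i = 1; i < n1-1; i++)
            for (int j = 1; j < n2-1; j++)
                DF_x1[i][j] = F[i-1][j]*A + F[i][j]*B + F[i+1][j]*C;

        // x2 derivative: the stencil stays inside one contiguous row.
        for (int i = 1; i < n1-1; i++)
            for (int j = 1; j < n2-1; j++)
                DF_x2[i][j] = F[i][j-1]*D + F[i][j]*E + F[i][j+1]*G;

        // Loop type 2 (Poisson / second derivative): the 5-point version.
        for (int i = 1; i < n1-1; i++)
            for (int j = 1; j < n2-1; j++)
                DDF[i][j] = F[i-1][j]*A + F[i][j]*B + F[i+1][j]*C
                          + F[i][j-1]*D + F[i][j]*E + F[i][j+1]*G;
    }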

Then we face another problem: memory distribution. Each core has its own memory and needs to share it with the others. Let's consider the last loop, but simplified:

    for t 1 -> niter

        // even
        for i 1 -> n1-2
                F1[i] = F0[i-1]*A + F0[i]*B + F0[i+1]*C

        //odd
        for i 1 -> n1-2
                F0[i] = F1[i-1]*A + F1[i]*B + F1[i+1]*C

Consider n1 = 512, but suppose it cannot be stored in local memory because of limited RAM. The memory is distributed over core0 (1 -> 255) and core1 (256 -> 512), which are NOT on the same computer but connected through a network. In this case, the derivative at i = 256 needs the point i = 255, but that value lives on the other processor. The memory holding copies of the other processors' values is called GHOST memory. So the loop becomes:

    ! update boundary memory :
    Share to ghost : core0 : F0[255] -> Network -> F0[0] : core1 (don't forget that for core1, the array restarts from 0)
    Share to ghost : core1 : F0[1] -> Network -> F0[256] : core0 (you understand that F0[256] is the ghost for core0, and F0[0] is the ghost for core1)
    // even, each core does this loop.
    for i 1 -> n1-2
            F1[i] = F0[i-1]*A + F0[i]*B + F0[i+1]*C

    ! update boundary memory :
    Share to ghost : core0 : F1[255] -> Network -> F1[0] : core1
    Share to ghost : core1 : F1[1] -> Network -> F1[256] : core0

    //odd, each core does this loop.
    for i 1 -> n1-2
            F0[i] = F1[i-1]*A + F1[i]*B + F1[i+1]*C
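
As a compilable sketch of this exchange (assuming MPI C, a 1D decomposition with one ghost cell on each side; the names local_n, left, right are illustrative, and ranks at the physical boundary pass MPI_PROC_NULL as neighbour):

    #include <mpi.h>

    // Sketch: blocking ghost-cell exchange. F[0] and F[local_n+1] are the
    // ghosts; F[1]..F[local_n] are the points owned by this core.
    void exchange_ghosts(double *F, int local_n, int left, int right)
    {
        // send my first owned point to the left, receive my right ghost
        MPI_Sendrecv(&F[1],         1, MPI_DOUBLE, left,  0,
                     &F[local_n+1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // send my last owned point to the right, receive my left ghost
        MPI_Sendrecv(&F[local_n],   1, MPI_DOUBLE, right, 1,
                     &F[0],         1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }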

We need to deal with this. Mysticial, you now see where I am going: the loop interleaving needs to take this into account. I think it can be done this way:

        ! update boundary memory :
        Share to ghost : core0 : F0[255] -> Network -> F0[0] : core1 
        Share to ghost : core1 : F0[1] -> Network -> F0[256] : core0

        for t 1 -> niter

        ! compute borders in advance :
        core0 only : F1[255] = F0[254]*A + F0[255]*B + F0[256]*C
        core1 only : F1[1] = F0[0]*A + F0[1]*B + F0[2]*C

        Launch Share to ghost asynchronous : core0 : F1[255] -> Network -> F1[0] : core1 
        Launch Share to ghost asynchronous : core1 : F1[1] -> Network -> F1[256] : core0

        Meanwhile (this works because MPI supports asynchronous communications):
        // even
        for i 2 -> n1-3 (note the reduced domain)
                F1[i] = F0[i-1]*A + F0[i]*B + F0[i+1]*C

        Check that communications are done.


        ! compute borders in advance :
        core0 only : F0[255] = F1[254]*A + F1[255]*B + F1[256]*C
        core1 only : F0[1] = F1[0]*A + F1[1]*B + F1[2]*C

        Launch Share to ghost asynchronous : core0 : F0[255] -> Network -> F0[0] : core1 
        Launch Share to ghost asynchronous : core1 : F0[1] -> Network -> F0[256] : core0

        //odd, each core do this loop.
        for i 2 -> n1-3
                F0[i] = F1[i-1]*A + F1[i]*B + F1[i+1]*C

        Check that communications are done.
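
Written out with MPI's nonblocking calls, one half-step of this interleaved scheme might look like the sketch below (names are illustrative; this is the general case of a rank with neighbours on both sides, and F_src and F_dst swap roles between the even and odd phases):

    #include <mpi.h>

    // Sketch: one half-step with communication/computation overlap.
    // F_src/F_dst follow the even/odd scheme; left/right are neighbour
    // ranks (MPI_PROC_NULL at the physical boundary).
    void half_step(const double *F_src, double *F_dst, int local_n,
                   int left, int right, double A, double B, double C)
    {
        MPI_Request req[4];

        // 1. compute the border points in advance
        F_dst[1]       = F_src[0]*A         + F_src[1]*B       + F_src[2]*C;
        F_dst[local_n] = F_src[local_n-1]*A + F_src[local_n]*B + F_src[local_n+1]*C;

        // 2. launch the asynchronous ghost exchange of the borders
        MPI_Isend(&F_dst[1],         1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(&F_dst[local_n],   1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Irecv(&F_dst[0],         1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
        MPI_Irecv(&F_dst[local_n+1], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

        // 3. meanwhile, compute the interior (the reduced domain)
        for (int i = 2; i < local_n; i++)
            F_dst[i] = F_src[i-1]*A + F_src[i]*B + F_src[i+1]*C;

        // 4. check that communications are done
        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    }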

I hope I didn't make a mistake with the indices somewhere. Let's consider the first type of loop for the moment, since it is the simplest; then we can go through loops 2 and 3, which are similar. The aim is to do this (which is similar to image processing):

    // x1 derivative
    for i 1 -> n1
        for j 1 ->n2
            DF_x1[i][j] = F[i-1][j]*A + F[i][j]*B + F[i+1][j]*C

    // x2 derivative
    for i 1 -> n1
        for j 1 ->n2
            DF_x2[i][j] = F[i][j-1]*D + F[i][j]*E + F[i][j+1]*G

I am working on it, and I will post the resulting code in a few hours, taking your recommendations into account.


Answer 1:


The wise words of Mysticial in code:

  xmm7 = InvDxDx * dtAlpha; // precalculate
  xmm6 = -2*xmm7;
  xmm0 = *ptr++;   // has values d0, d1
  xmm1 = *ptr++;   // has values d2, d3
  while (loop--) {
     xmm2  = xmm0;  // take a copy of d0,d1
     xmm0 += xmm1;  // d0+d2, d1+d3
     xmm2 = shufpd (xmm2,xmm1, 0x01);  // pick the middle elements d1,d2
     xmm0 *= xmm7;  // sum of outer elements * factor
     xmm2 *= xmm6;  // -2 * center element * factor
     // here's still a nasty dependency
     xmm2 += xmm0;
     xmm0 = xmm1;    // shift registers
     *out_ptr ++= xmm2;  // flush
     xmm1 = *ptr++;  // read in new values

  }

This can be improved by dividing the loop into 2 segments that work 502 entries apart (or on different tasks) and interleaving the results, which breaks the dependency chain.

Also, the shift-register approach (xmm0 <- xmm1, xmm1 <- new) can be avoided by unrolling the loop twice and swapping the meanings of xmm0 and xmm1 every other iteration.

This points to another big issue: when using intrinsics, registers are spilled to memory all the time.
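
As compilable SSE2 intrinsics, that scheme might look like the sketch below (illustrative names, untested; boundaries and the loop tail need scalar cleanup). Two details: the register pseudocode above drops the "+ U0[i]" term of the question's formula, which this sketch adds back, and it trades the question's two unaligned loads per iteration for aligned loads plus one unaligned store:

    #include <emmintrin.h>

    // Sketch: shift-register scheme with aligned loads, one unaligned store.
    // Computes out[i] = in[i] + f*(in[i-1] - 2.0*in[i] + in[i+1]) for
    // i = 1 .. n-4, with f = DtAlpha*InvDxDx precalculated by the caller.
    // `in` must be 16-byte aligned and n even.
    void heat_step_sse(const double *in, double *out, int n, double f)
    {
        const __m128d vf   = _mm_set1_pd(f);        // xmm7 in the pseudocode
        const __m128d vm2f = _mm_set1_pd(-2.0 * f); // xmm6
        __m128d xmm0 = _mm_load_pd(&in[0]);         // d0, d1
        __m128d xmm1 = _mm_load_pd(&in[2]);         // d2, d3

        for (int i = 4; i <= n - 2; i += 2) {
            __m128d outer  = _mm_add_pd(xmm0, xmm1);        // d0+d2, d1+d3
            __m128d center = _mm_shuffle_pd(xmm0, xmm1, 1); // d1, d2 (shufpd)
            __m128d res    = _mm_add_pd(_mm_mul_pd(outer, vf),
                                        _mm_mul_pd(center, vm2f));
            res = _mm_add_pd(res, center);       // the "+ U0[i]" term
            _mm_storeu_pd(&out[i-3], res);       // results land at odd offsets
            xmm0 = xmm1;                         // shift registers
            xmm1 = _mm_load_pd(&in[i]);          // read in new values
        }
    }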



Source: https://stackoverflow.com/questions/13476223/optimising-an-1d-heat-equation-using-simd
