What are the benefits of using vaddss instead of addss in scalar matrix addition?

Asked by 孤独总比滥情好 on 2021-01-20 01:54

I have implemented a scalar matrix addition kernel.

#include <stdio.h>       // header names were eaten by the HTML scrape;
#include <x86intrin.h>   // stdio.h and x86intrin.h are guesses for an SSE/AVX timing kernel
//#include <immintrin.h>

//loops and iterations:
// ... (the #define constants and the rest of the kernel listing were lost in the scrape)
1 Answer
  • Answered 2021-01-20 02:40

    The problem you are observing is explained here. On Skylake systems, if the upper half of an AVX register is dirty, then non-VEX-encoded SSE operations have a false dependency on the upper half of that register. In your case it seems there is a bug in your version of glibc (2.23). On my Skylake system with Ubuntu 16.10 and glibc 2.24 I don't have the problem. You can use

    __asm__ __volatile__ ( "vzeroupper" : : : ); 
    

    to clear the upper halves of the AVX registers. I don't think you can use an intrinsic such as _mm256_zeroupper to fix this, because GCC will treat the file as SSE code and not recognize the intrinsic. The option -mvzeroupper won't work either, because GCC once again thinks it's SSE code and will not emit the vzeroupper instruction.
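
    A minimal sketch of that workaround wrapped in a helper (the wrapper name is mine): the assembler accepts vzeroupper even when the compiler is targeting plain SSE, which is exactly why the inline-asm form builds where the intrinsic does not.

    // Builds with e.g. gcc -O2 -msse2, where _mm256_zeroupper() would be rejected:
    static inline void clear_ymm_uppers(void) {
        // VZEROUPPER zeroes the upper 128 bits of all YMM registers,
        // removing the false dependency for subsequent non-VEX SSE code.
        __asm__ __volatile__ ( "vzeroupper" : : : );
    }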

    BTW, it's Microsoft's fault that the hardware has this problem.


    Update:

    Other people are apparently encountering this problem on Skylake. It has been observed after printf, memset, and clock_gettime.
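
    For instance, in a benchmark shaped like the question's (the sizes and names here are mine, not from the original listing), the barrier would go between those library calls and the timed loop:

    #include <stdio.h>
    #include <time.h>

    #define M (128 * 128)   /* hypothetical matrix size */

    static float x[M], y[M];

    int main(void) {
        struct timespec t0, t1;

        /* Lazy PLT resolution for printf/clock_gettime can leave the YMM
         * uppers dirty on the buggy glibc 2.23, so clear them by hand. */
        printf("warm-up\n");
        __asm__ __volatile__ ( "vzeroupper" : : : );

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < M; i++)
            y[i] = x[i] + 3.14159f;   /* scalar matrix addition; non-VEX addss/addps with -msse2 */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%.6f s\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }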

    If your goal is to compare 128-bit operations with 256-bit operations, you could consider using -mprefer-avx128 -mavx (which is particularly useful on AMD). But then you would be comparing AVX256 against AVX128, and not AVX256 against SSE. AVX128 and SSE both use 128-bit operations, but their implementations are different. If you benchmark, you should mention which one you used.
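
    To make the three variants concrete, the same scalar kernel (the function below is my sketch, not the question's code) can be compiled three ways:

    /* add.c -- one kernel, three encodings:
     *   gcc -O3 -msse2                 add.c   ->  non-VEX addss/addps   (SSE)
     *   gcc -O3 -mavx -mprefer-avx128  add.c   ->  128-bit vaddss/vaddps (AVX128)
     *   gcc -O3 -mavx                  add.c   ->  256-bit vaddps where the
     *                                              vectorizer kicks in   (AVX256)
     */
    void add_scalar(float *restrict y, const float *restrict x, float s, int n) {
        for (int i = 0; i < n; i++)
            y[i] = x[i] + s;   /* the encoding of this add depends on the flags above */
    }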
