I need some ideas on how to write a cross-platform C++ implementation of a few parallelizable problems in a way that lets me take advantage of SIMD (SSE, SPU, etc.) if available, as well as switch between the SIMD and non-SIMD paths at run time.
Notice that the given example decides what to execute at compile time (since it uses the preprocessor). In this case you can use more sophisticated techniques to decide what you actually want to execute; for example, tag dispatching: http://cplusplus.co.il/2010/01/03/tag-dispatching/ Following the example shown there, you could have the fast implementation use SIMD and the slow one do without.
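To make that concrete, here is a minimal tag-dispatch sketch along those lines; the tag names, the USE_SSE macro, and the alignment assumptions are my own, not the article's:
#include <xmmintrin.h>
struct simd_tag {};
struct scalar_tag {};
// SSE version: processes four floats per step.
// Assumes the arrays are 16-byte aligned and n is a multiple of 4.
void add_arrays(float* dst, const float* a, const float* b, int n, simd_tag)
{
    for (int i = 0; i < n; i += 4)
        _mm_store_ps(dst + i, _mm_add_ps(_mm_load_ps(a + i), _mm_load_ps(b + i)));
}
// Scalar fallback: one float per step.
void add_arrays(float* dst, const float* a, const float* b, int n, scalar_tag)
{
    for (int i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}
// The tag is chosen once, at compile time; the overload that is never
// called simply never gets instantiated into the call site.
#ifdef USE_SSE
typedef simd_tag impl_tag;
#else
typedef scalar_tag impl_tag;
#endif
void add_arrays(float* dst, const float* a, const float* b, int n)
{
    add_arrays(dst, a, b, n, impl_tag());
}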
Have you thought about using existing solutions like liboil? It implements lots of common SIMD operations and can decide at runtime whether to use SIMD/non-SIMD code (using function pointers assigned by an initialization function).
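For reference, the runtime-dispatch pattern liboil uses looks roughly like this; the names below are illustrative, not liboil's actual API, and the CPU probe is GCC/Clang-specific:
#include <xmmintrin.h>
typedef void (*add_f32_fn)(float* dst, const float* a, const float* b, int n);
// Portable fallback: one float per step.
static void add_f32_scalar(float* dst, const float* a, const float* b, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}
// SSE version. Assumes 16-byte-aligned arrays and n a multiple of 4.
static void add_f32_sse(float* dst, const float* a, const float* b, int n)
{
    for (int i = 0; i < n; i += 4)
        _mm_store_ps(dst + i, _mm_add_ps(_mm_load_ps(a + i), _mm_load_ps(b + i)));
}
// Every caller goes through this pointer; default to the portable version.
static add_f32_fn add_f32 = add_f32_scalar;
// Run once at startup, like liboil's initialization function.
static void simd_init()
{
    // __builtin_cpu_supports is a GCC/Clang builtin; other compilers
    // would need to query CPUID themselves.
    if (__builtin_cpu_supports("sse"))
        add_f32 = add_f32_sse;
}
The probe is paid once at startup; every later call is just an indirect jump through the pointer.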
In case anyone is interested, this is the quick-and-dirty code I came up with to test a new idea that occurred to me while reading about the library Paul posted.
Thanks Paul!
// This is just a conceptual test.
// I haven't profiled the code or verified that the results are correct.
#include <xmmintrin.h>
// This class does all the math.
template <bool SIMD>
class cStreamF32
{
private:
void* m_data;
void* m_dataEnd;
__m128* m_current128;
float* m_current32;
public:
cStreamF32(int size)
{
if (SIMD)
m_data = _mm_malloc(sizeof(float) * size, 16); // 16-byte alignment for SSE
else
m_data = new float[size];
// Mark one past the last float so Next() knows when to stop.
m_dataEnd = (char*)m_data + sizeof(float) * size;
}
~cStreamF32()
{
if (SIMD)
_mm_free(m_data);
else
delete[] (float*)m_data;
}
inline void Begin()
{
if (SIMD)
m_current128 = (__m128*)m_data;
else
m_current32 = (float*)m_data;
}
inline bool Next()
{
if (SIMD)
{
m_current128++;
return m_current128 < (__m128*)m_dataEnd;
}
else
{
m_current32++;
return m_current32 < (float*)m_dataEnd;
}
}
inline void operator=(const __m128 x)
{
*m_current128 = x;
}
inline void operator=(const float x)
{
*m_current32 = x;
}
inline __m128 operator+(const cStreamF32<true>& x)
{
// _mm_add_ps adds all four packed floats (_mm_add_ss would only touch the lowest lane).
return _mm_add_ps(*m_current128, *x.m_current128);
}
inline float operator+(const cStreamF32<false>& x)
{
return *m_current32 + *x.m_current32;
}
inline __m128 operator+(const __m128 x)
{
return _mm_add_ps(*m_current128, x);
}
inline float operator+(const float x)
{
return *m_current32 + x;
}
inline __m128 operator*(const cStreamF32<true>& x)
{
// Likewise, _mm_mul_ps multiplies all four lanes at once.
return _mm_mul_ps(*m_current128, *x.m_current128);
}
inline float operator*(const cStreamF32<false>& x)
{
return *m_current32 * *x.m_current32;
}
inline __m128 operator*(const __m128 x)
{
return _mm_mul_ps(*m_current128, x);
}
inline float operator*(const float x)
{
return *m_current32 * x;
}
};
// Executes both functors
template<class T1, class T2>
void Execute(T1& functor1, T2& functor2)
{
functor1.Begin();
do
{
functor1.Exec();
}
while (functor1.Next());
functor2.Begin();
do
{
functor2.Exec();
}
while (functor2.Next());
}
// This is the implementation of the problem
template <bool SIMD>
class cTestFunctor
{
private:
cStreamF32<SIMD> a;
cStreamF32<SIMD> b;
cStreamF32<SIMD> c;
public:
cTestFunctor() : a(1024), b(1024), c(1024) { }
inline void Exec()
{
c = a + b * a;
}
inline void Begin()
{
a.Begin();
b.Begin();
c.Begin();
}
inline bool Next()
{
// All three streams have the same length, so any one of them
// can decide when the loop ends.
a.Next();
b.Next();
return c.Next();
}
};
int main (int argc, char * const argv[])
{
cTestFunctor<true> functor1;
cTestFunctor<false> functor2;
Execute(functor1, functor2);
return 0;
}
You might want to take a glance at my attempt at SIMD/non-SIMD:
vrep, a templated base class with specializations for SIMD (note how it distinguishes between floats-only SSE and SSE2, which introduced integer vectors).
More useful v4f, v4i, etc. classes (subclassed via an intermediate v4); the sketch after this list shows the basic shape of such a wrapper.
Of course it's far more geared towards 4-element vectors for rgba/xyz-type calculations than SoA, so it will completely run out of steam when 8-way AVX comes along, but the general principles might be useful.
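In case it helps to see the shape of such a class, here is a bare-bones sketch of a v4f-style wrapper; this is my own simplification, not the actual vrep/v4f code:
#include <xmmintrin.h>
// Four packed floats; element-wise operators map straight onto single SSE instructions.
class v4f
{
__m128 m_value;
public:
v4f(__m128 v) : m_value(v) {}
explicit v4f(float f) : m_value(_mm_set1_ps(f)) {} // broadcast one scalar to all lanes
v4f(float x, float y, float z, float w) : m_value(_mm_setr_ps(x, y, z, w)) {}
v4f operator+(const v4f& rhs) const { return v4f(_mm_add_ps(m_value, rhs.m_value)); }
v4f operator*(const v4f& rhs) const { return v4f(_mm_mul_ps(m_value, rhs.m_value)); }
void store(float* dst) const { _mm_storeu_ps(dst, m_value); } // unaligned store back to memory
};
// Usage: v4f colour = v4f(r, g, b, a) * v4f(0.5f);
An integer v4i would wrap __m128i the same way, which is where the SSE/SSE2 distinction above comes in.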
You might want to look at the source for the MacSTL library for some ideas in this area: www.pixelglow.com/macstl/
The most impressive approach to SIMD scaling I've seen is the RTFact ray-tracing framework: slides, paper. Well worth a look. The researchers are closely associated with Intel (Saarbrücken now hosts the Intel Visual Computing Institute), so you can be sure forward scaling onto AVX and Larrabee was on their minds.
Intel's Ct "data parallelism" template library looks quite promising too.