Question
Why am I seeing a different result on the GPU compared to a sequential reduction on the CPU?
import numpy
from numba import cuda
from functools import reduce

A = numpy.arange(100, dtype=numpy.float64) + 1  # values 1.0 .. 100.0

# Parallel reduction on the GPU
cuda.reduce(lambda a, b: a + b * 20)(A)
# result 12952749821.0

# Sequential left-to-right fold on the CPU
reduce(lambda a, b: a + b * 20, A)
# result 100981.0

import numba
numba.__version__
# '0.34.0+5.g1762237'
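For reference, functools.reduce folds strictly left to right, so the sequential value is A[0] + 20*(A[1] + ... + A[99]) = 1 + 20*5049 = 100981.0, exactly the number printed above.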
Similar behavior occurs when using the Java Stream API to parallelize the reduction on the CPU:
import java.util.ArrayList;
import java.util.Optional;

int n = 10;
float[] inputArray = new float[n];
ArrayList<Float> inputList = new ArrayList<Float>();
for (int i = 0; i < n; i++) {
    inputArray[i] = i + 1;
    inputList.add(inputArray[i]);
}

// Parallel reduction over the stream
Optional<Float> resultStream = inputList.stream().parallel().reduce((x, y) -> x + y * 20);

// Sequential left fold over the same data
float sequentialResult = inputArray[0];
for (int i = 1; i < inputArray.length; i++) {
    sequentialResult = sequentialResult + inputArray[i] * 20;
}

System.out.println("Sequential Result " + sequentialResult);
// Sequential Result 10541.0
System.out.println("Stream Result " + resultStream.get());
// Stream Result 1.2466232E8
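The order-dependence can be reproduced on the CPU alone. A minimal sketch (the fixed chunking below is an arbitrary assumption, not how Numba or the JDK actually schedules the work): fold each chunk independently, then combine the partial results with the same operator.

from functools import reduce
import numpy

f = lambda a, b: a + b * 20
A = numpy.arange(100, dtype=numpy.float64) + 1

# Fold each chunk left to right, then fold the partial results,
# mimicking the split-and-combine shape of a parallel reduction.
partials = [reduce(f, A[i:i + 25]) for i in range(0, 100, 25)]
reduce(f, partials)
# differs from reduce(f, A) because f is not associative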
Answer 1:
It seems that, as pointed out by the Numba team, lambda a, b: a + b * 20
is not an associative and commutative reduction function, which is what produces this unexpected result. A parallel reduction is free to group and reorder the pairwise combinations, so it only matches the sequential left fold when the operator is associative (and, in most implementations, commutative).
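This is easy to check: regrouping the same three operands changes the value. And if the intended quantity is A[0] + 20*(A[1] + ... + A[n-1]), an order-independent reformulation built on an associative operator (plain addition) recovers the sequential answer on either device; the lines below are a sketch of that idea, not the only possible rewrite.

f = lambda a, b: a + b * 20
f(f(1.0, 2.0), 3.0)  # (1 + 2*20) + 3*20 = 101.0
f(1.0, f(2.0, 3.0))  # 1 + (2 + 3*20)*20 = 1241.0 -> grouping changes the result

# Order-independent equivalent of the sequential fold:
A[0] + 20 * A[1:].sum()                             # 100981.0 on the CPU
A[0] + 20 * cuda.reduce(lambda a, b: a + b)(A[1:])  # same value on the GPU: a + b is associative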
Source: https://stackoverflow.com/questions/45357740/custom-reduction-on-gpu-vs-cpu-yield-different-result