I am working on a stochastic process and I want to generate a different series of random numbers in a CUDA kernel each time I run the program, similar to what we do in C++.
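For context, this is the host-side C++ idiom I have in mind (a minimal sketch; I am assuming the usual srand(time(0)) pattern is what I want to reproduce on the GPU):

    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    int main() {
        std::srand(static_cast<unsigned>(std::time(nullptr))); // seed once per run
        for (int i = 0; i < 5; ++i)
            std::cout << std::rand() % 100 << ' ';             // a different series on every run
        std::cout << '\n';
        return 0;
    }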
You can create more than one __global__ function for random-number initialization and generation, or put the kernel launches in a loop so the seeding runs several times, for example:

    gpuErrchk(cudaMalloc((void**)&gpu_no, N * sizeof(double))); // allocate device memory for the random numbers once, outside the loop
    for (int rns = 0; rns < 5; rns++) {       // re-seed 'loop' times
        init<<<N, 10>>>(devState, time(0));   // seed the curandState array
        rndn<<<N, 10>>>(devState, gpu_no);    // kernel that fills gpu_no with random numbers
        gpuErrchk(cudaMemcpy(cpu_no, gpu_no, N * sizeof(double), cudaMemcpyDeviceToHost));
    }
    cout << "the transition matrix " << ++generate << " seed generation is done" << endl;
Re-seeding like this has no noticeable effect on the random numbers generated (time(0) changes only once per second, so rapid re-seeding can repeat the same seed), and it carries the risk that the sequences end up correlated and that the simulation fails to converge in the long run. Why would you want to seed more than once per run anyway? You can use the library functions to generate different types of random number distributions, such as curand_uniform, curand_normal, curand_poisson, and so on.
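For illustration, here is a minimal self-contained sketch of the pattern I would use instead: seed the per-thread states once per run with time(NULL), then draw from whichever distribution you need. The kernel names setup/draw, the array names, and the launch configuration are placeholders rather than your code; the only CURAND device calls used are curand_init and curand_uniform_double.

    #include <cstdio>
    #include <ctime>
    #include <curand_kernel.h>

    __global__ void setup(curandState *state, unsigned long long seed) {
        int id = blockIdx.x * blockDim.x + threadIdx.x;
        curand_init(seed, id, 0, &state[id]);        // one state per thread, seeded once per run
    }

    __global__ void draw(curandState *state, double *out) {
        int id = blockIdx.x * blockDim.x + threadIdx.x;
        curandState local = state[id];
        out[id] = curand_uniform_double(&local);     // or curand_normal_double, curand_poisson, ...
        state[id] = local;                           // store the advanced state back
    }

    int main() {
        const int N = 256;
        curandState *devState;  double *devOut;  double hostOut[N];
        cudaMalloc((void**)&devState, N * sizeof(curandState));
        cudaMalloc((void**)&devOut,   N * sizeof(double));
        setup<<<N / 32, 32>>>(devState, (unsigned long long)time(NULL)); // different seed on every run
        draw<<<N / 32, 32>>>(devState, devOut);
        cudaMemcpy(hostOut, devOut, N * sizeof(double), cudaMemcpyDeviceToHost);
        printf("first draw: %f\n", hostOut[0]);
        cudaFree(devState);  cudaFree(devOut);
        return 0;
    }

Seeding once at program start is enough to get a different series on every run; within a run, the per-thread subsequence number (the second argument to curand_init) keeps the streams independent.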
I don't know if this answers your question.