I'm investigating using NVIDIA GPUs for Monte Carlo simulations. However, I would like to use the GSL random number generators and also a parallel random number generator.
I've just found that NAG provides some RNG routines. These libraries are free for academics.
You will have to implement them yourself.
The GSL manual recommends the Mersenne Twister.
The Mersenne Twister authors provide a version for NVIDIA GPUs. I looked into porting it to the R package gputools, but found that I needed an excessively large number of draws (millions, I think) before "generate on the GPU and transfer to R" was faster than simply drawing in R on the CPU alone.
It really is a computation / communication tradeoff.
Massively parallel random number generation of the kind GPUs require is a difficult problem and an active research topic. You have to be careful not only to use a good sequential generator (those you can find in the literature) but also to guarantee that the parallel streams are independent. Pairwise independence is not sufficient for a good Monte Carlo simulation. AFAIK there is no good public-domain code available.
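One classic (if limited) way to partition a single sequential generator across workers is leapfrogging: worker k of n consumes draws k, k+n, k+2n, .... This does not solve the independence problem described above, since the streams are exact interleavings of one sequence, but it does guarantee no overlap. A minimal sketch in Python; the worker count, seed, and MT19937-backed `random.Random` are illustrative choices, not anything from GSL or the CUDA SDK:

```python
import random

def leapfrog_streams(seed, n_workers, draws_per_worker):
    """Split one sequential RNG into n interleaved, non-overlapping
    sub-streams: worker k sees draws k, k+n, k+2n, ..."""
    rng = random.Random(seed)
    flat = [rng.random() for _ in range(n_workers * draws_per_worker)]
    # Worker k takes every n-th draw, starting at offset k.
    return [flat[k::n_workers] for k in range(n_workers)]

streams = leapfrog_streams(seed=42, n_workers=4, draws_per_worker=3)
for k, s in enumerate(streams):
    print(k, s)
```

In a real GPU setting you would use the generator's jump-ahead function rather than materialising the whole sequence on the host, but the partitioning idea is the same.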
Use the Mersenne Twister PRNG, as provided in the CUDA SDK.
Here we use Sobol sequences on the GPU.
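For intuition: the first dimension of a Sobol sequence is the base-2 van der Corput sequence (the bits of the index reflected about the binary point); the full multi-dimensional construction with direction numbers is what GPU quasi-random libraries actually implement. A minimal sketch of that first dimension:

```python
def van_der_corput_base2(i):
    """Base-2 radical inverse: reflect the bits of i about the
    binary point, e.g. 1 -> 0.5, 2 -> 0.25, 3 -> 0.75."""
    x, f = 0.0, 0.5
    while i:
        if i & 1:
            x += f
        i >>= 1
        f *= 0.5
    return x

# The low-discrepancy points fill [0, 1) far more evenly than
# pseudo-random draws, which is why they suit Monte Carlo integration.
points = [van_der_corput_base2(i) for i in range(1, 9)]
print(points)  # → [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```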