Monte Carlo

Python: Monte Carlo fails at a large iteration count

我们两清 submitted on 2019-12-05 01:22:29
I wrote a simple Monte Carlo π calculation program in Python using the multiprocessing module. It works just fine, but when I pass 1E+10 iterations to each worker, a problem occurs and the result is wrong. I can't understand what the problem is, because everything is fine with 1E+9 iterations!

    import sys
    from multiprocessing import Pool
    from random import random

    def calculate_pi(iters):
        """Worker function"""
        points = 0  # points inside circle
        for i in xrange(iters):  # iterate iters times; iterating over the bare number is a TypeError
            x = random()
            y = random()
            if x ** 2 + y ** 2 <= 1:
                points += 1
        return points

    if __name__ == "__main__":
        if len(sys.argv) != 3:
            print "Usage:
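One likely culprit here, offered as an assumption rather than a diagnosis: 1E+10 is a float, and Python 2's xrange additionally cannot exceed a C long on 32-bit builds. A minimal sketch that sidesteps both issues, in Python 3 syntax where range accepts arbitrarily large ints (the helper name count_hits is mine, not from the question):

    from multiprocessing import Pool
    from random import random

    def count_hits(iters):
        """Count points falling inside the unit quarter circle."""
        iters = int(iters)           # 1E+10 arrives as a float; cast before counting with it
        hits = 0
        for _ in range(iters):       # Python 3 range handles arbitrarily large ints
            x, y = random(), random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        workers, per_worker = 4, 10 ** 6    # small numbers so the demo finishes quickly
        with Pool(workers) as pool:
            total = sum(pool.map(count_hits, [per_worker] * workers))
        print(4.0 * total / (workers * per_worker))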

Testing the quality of PRNGs

戏子无情 submitted on 2019-12-04 23:11:59
Question: I am playing around with PRNGs (like the Mersenne Twister and stdlib's rand() function) and I want a good test that would help me ascertain the quality of the random data the PRNGs produce. I have calculated the value of π using random numbers generated by the PRNGs, and I find rand() and the Mersenne Twister too close to tell apart (do I need to scrutinize beyond 10 decimal places?). I do not have much idea about Monte Carlo simulations; please let me know about some algorithm
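One simple first check, offered as a sketch rather than a full answer (serious testing uses batteries such as dieharder or TestU01): bin the samples and run a chi-squared test against the uniform expectation. This assumes SciPy is available:

    import numpy as np
    from scipy.stats import chisquare

    def uniformity_pvalue(samples, bins=100):
        """Chi-squared test: are the samples plausibly uniform on [0, 1)?"""
        observed, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
        expected = np.full(bins, len(samples) / bins)  # flat histogram under uniformity
        return chisquare(observed, expected).pvalue

    rng = np.random.default_rng(0)  # NumPy's default generator (PCG64)
    print(uniformity_pvalue(rng.random(1_000_000)))  # large p-value: no evidence of bias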

Directly “plot” line segments to numpy array

China☆狼群 submitted on 2019-12-04 17:35:02
One of my first projects realized in Python does Monte Carlo simulation of stick percolation. The code grew continually. The first part was the visualization of the stick percolation: in an area of width*length, a defined density (sticks/area) of straight sticks of a certain length is plotted with random start coordinates and directions. As I often use gnuplot, I wrote the generated (x, y) start and end coordinates to a text file to gnuplot them afterwards. I then found here a nice way to analyze the image data using scipy.ndimage.measurements. The image is read by using ndimage.imread in
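One way to skip the gnuplot round trip entirely (a sketch of mine, not the asker's code; it assumes the endpoints already lie inside the array) is to rasterize each stick directly into a NumPy array by oversampling points along the segment and rounding them to pixel indices:

    import numpy as np

    def draw_segment(img, x0, y0, x1, y1):
        """Set to 1 every pixel the segment (x0, y0)-(x1, y1) passes through."""
        # Two samples per pixel along the longer axis, so no gaps appear.
        n = 2 * int(max(abs(x1 - x0), abs(y1 - y0))) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        img[ys, xs] = 1          # row index is y, column index is x

    img = np.zeros((100, 100), dtype=np.uint8)
    draw_segment(img, 5, 10, 90, 40)

If a dependency is acceptable, scikit-image ships a Bresenham-style rasterizer in skimage.draw.line that returns the row and column indices directly.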

Thread error: can't start new thread

自作多情 submitted on 2019-12-04 12:05:37
Here's an MWE of a much larger code I'm using. It performs a Monte Carlo integration over a KDE (kernel density estimate) for all values located below a certain threshold (the integration method was suggested over at this question: Integrate 2D kernel density estimate), iteratively for a number of points in a list, and returns a list made of these results.

    import numpy as np
    from scipy import stats
    from multiprocessing import Pool
    import threading

    # Define KDE integration function.
    def kde_integration(m_list):
        # Put some of the values from the m_list into two new lists.
        m1, m2 = [], []
        for
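The "can't start new thread" error usually means the process has hit the operating system's thread limit. Under that assumption (this is a generic sketch, not the asker's code), the usual cure is one fixed-size pool that is reused for every work item, instead of a new thread per item:

    from multiprocessing import Pool

    def work(item):
        return item * item        # stand-in for the real kde_integration call

    if __name__ == "__main__":
        items = range(10000)
        with Pool(processes=4) as pool:      # 4 workers, reused for every item
            results = pool.map(work, items)  # no per-item threads are created
        print(results[:5])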

Thrust equivalent of OpenMP code

醉酒当歌 submitted on 2019-12-04 05:08:10
Question: The code I'm trying to parallelize in OpenMP is a Monte Carlo that boils down to something like this:

    int seed = 0;
    std::mt19937 rng(seed);
    double result = 0.0;
    int N = 1000;
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        result += rng();
    }
    std::cout << result << std::endl;

I want to make sure that the state of the random number generator is shared across threads, and that the addition to the result is atomic. Is there a way of replacing this code with something from thrust::omp? From the
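Whatever the backend, the hazard in that loop is the shared rng state. The standard pattern, sketched below in Python to match the rest of this page rather than in Thrust, is one independently seeded generator per worker plus a reduction over the partial results (all names here are illustrative):

    import numpy as np
    from multiprocessing import Pool

    def partial_sum(args):
        seed, n = args
        rng = np.random.default_rng(seed)   # each worker owns its own generator state
        return rng.random(n).sum()          # local accumulation, nothing shared

    if __name__ == "__main__":
        workers, n = 4, 250
        with Pool(workers) as pool:
            # Distinct seeds per worker; summing the partials is the reduction step.
            result = sum(pool.map(partial_sum, [(s, n) for s in range(workers)]))
        print(result)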

Finding PI digits using Monte Carlo

我怕爱的太早我们不能终老 submitted on 2019-12-03 17:21:47
I have tried many algorithms for finding π using Monte Carlo. One of the solutions (in Python) is this:

    from random import uniform

    def calc_PI():
        n_points = 1000000
        hits = 0
        for i in range(n_points):
            x, y = uniform(0.0, 1.0), uniform(0.0, 1.0)
            if (x**2 + y**2) <= 1.0:
                hits += 1
        print "Calc2: PI result", 4.0 * float(hits) / n_points

The sad part is that even with 1000000000 points the precision is VERY bad (3.141...). Is this the maximum precision this method can offer? The reason I chose Monte Carlo was that it's very easy to break into parallel parts. Is there another algorithm for π that is easy to break into pieces
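That precision is in fact all the method offers: the hit count is binomial, so the one-sigma error of the estimate is 4*sqrt(p*(1-p)/N) with p = π/4, which shrinks only as 1/sqrt(N); about 10^9 samples buys roughly 4-5 correct digits, so "3.141..." is expected rather than a bug. A quick sketch of the expected error:

    from math import pi, sqrt

    def expected_error(n_points):
        """One-sigma error of the hit-or-miss pi estimate with n_points samples."""
        p = pi / 4.0    # probability a random point lands inside the quarter circle
        return 4.0 * sqrt(p * (1.0 - p) / n_points)

    for n in (10**6, 10**9, 10**12):
        print(n, expected_error(n))   # the error shrinks 10x only per 100x more samples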

Code for Monte Carlo simulation: generate samples of given size in R

送分小仙女□ submitted on 2019-12-03 15:04:46
Question: I started by generating a sample of 500 uniformly distributed random numbers between 0 and 1 using the code below:

    set.seed(1234)
    X <- runif(500, min=0, max=1)

Now I need to write pseudocode that generates 10000 samples of N=500 for an MC simulation, computes the mean of each newly created X, and stores the iteration number and mean value in a results object. I have never attempted this, and so far I have this:

    n.iter <- (10000*500)
    results <- matrix(0, n.iter, 4)

Finally, once this is
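For reference, the shape of the computation is 10000 independent replications of 500 uniforms with one mean per replication. A sketch of that structure (NumPy rather than the asker's R, to match the rest of this page; none of this is their code):

    import numpy as np

    rng = np.random.default_rng(1234)
    n_iter, n = 10_000, 500
    samples = rng.uniform(0.0, 1.0, size=(n_iter, n))  # one replication per row
    means = samples.mean(axis=1)                       # one mean per replication
    # Store (iteration number, mean) pairs, mirroring the "results object".
    results = np.column_stack((np.arange(1, n_iter + 1), means))
    print(results[:3])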

How to create a more efficient simulation loop for Monte Carlo in R

余生颓废 submitted on 2019-12-03 08:37:51
The purpose of this exercise is to create a population distribution of nutrient intake values. There were repeated measures in the earlier data; these have been removed, so each row is a unique person in the data frame. I have this code, which works quite well when tested on a small number of my data frame's rows. For all 7135 rows it is very slow: I tried to time it, but killed the run after 15 hours of elapsed time on my machine. The system.time results were "Timing stopped at: 55625.08 2985.39 58673.87". I would appreciate any comments on speeding up the simulation:

    Male.MC <- c
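Without the full loop one can only guess, but the usual culprits in a slow simulation loop are objects grown inside the loop and row-by-row work that could be done on whole arrays at once. A generic before/after sketch of that idea (NumPy rather than R, with arbitrary placeholder parameters; none of this is the asker's code):

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_draws = 7135, 1000   # placeholder sizes, loosely matching the question

    # Slow pattern: grow the result one simulated person at a time.
    slow = []
    for _ in range(n_people):
        slow.append(rng.normal(loc=50.0, scale=10.0, size=n_draws).mean())

    # Fast pattern: draw everything at once and reduce along one axis.
    fast = rng.normal(loc=50.0, scale=10.0, size=(n_people, n_draws)).mean(axis=1)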

Python: uniform distribution of points on a 4-dimensional sphere

风格不统一 submitted on 2019-12-03 07:10:49
I need a uniform distribution of points on a 4-dimensional sphere. I know this is not as trivial as picking 3 angles and using polar coordinates. In 3 dimensions I use:

    from random import random
    from math import acos, sin, cos, pi

    u = random()
    costheta = 2*u - 1       # for a distribution between -1 and 1
    theta = acos(costheta)
    phi = 2*pi*random()
    z = costheta
    x = sin(theta)*cos(phi)
    y = sin(theta)*sin(phi)

This gives a uniform distribution of x, y and z. How can I obtain a similar distribution in 4 dimensions? A standard way, though perhaps not the fastest, is to use Muller's method to generate uniformly distributed points on an N-sphere:
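A minimal sketch of Muller's method as just described (my code, not the original answer's): draw N independent standard normals and normalize. The joint Gaussian is rotation-invariant, so the normalized points are uniform on the sphere.

    import numpy as np

    def random_on_sphere(n_dims, n_points, seed=None):
        """Uniform points on the unit (n_dims-1)-sphere via Muller's method."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=(n_points, n_dims))        # isotropic Gaussian cloud
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # project onto the sphere
        return v

    pts = random_on_sphere(4, 5, seed=0)
    print(np.linalg.norm(pts, axis=1))   # all 1.0: the points lie on the sphere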

Fast generation of random set, Monte Carlo Simulation

纵饮孤独 submitted on 2019-12-03 06:20:23
I have a set of ~100 numbers on which I wish to perform an MC simulation. The basic idea is: I fully randomize the set, do some comparisons/checks on the first ~20 values, store the result, and repeat. The actual comparison/check algorithm is extremely fast; it completes in about 50 CPU cycles. With this in mind, and in order to optimize these simulations, I need to generate the random sets as fast as possible. Currently I'm using a Multiply-With-Carry algorithm by George Marsaglia, which provides me with a random integer in 17 CPU cycles, quite fast. However, using the Fisher-Yates
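Since only the first ~20 values of each shuffled set are ever inspected, one natural optimization, shown here as a Python sketch of the idea rather than the asker's C-level code, is a partial Fisher-Yates shuffle that stops after the first k positions; those positions are then distributed exactly as in a full shuffle:

    from random import randrange

    def partial_shuffle(a, k):
        """Randomize only the first k positions of a, as a full Fisher-Yates would."""
        n = len(a)
        for i in range(k):
            j = randrange(i, n)        # pick uniformly from the not-yet-fixed tail
            a[i], a[j] = a[j], a[i]
        return a

    data = list(range(100))
    print(partial_shuffle(data, 20)[:20])   # first 20 entries of a uniform shuffle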