numerical-integration

Numerical Integration over a Matrix of Functions, SymPy and SciPy

五迷三道 submitted on 2019-12-01 16:48:21
From my SymPy output I have the matrix shown below, which I must integrate in 2D. Currently I am doing it element-wise, as shown below. This method works, but it gets too slow (for both sympy.mpmath.quad and scipy.integrate.dblquad) for my real case, in which A and its functions are much bigger (see edit below):

from sympy import Matrix, sin, cos
import sympy
import scipy
sympy.var('x, t')
A = Matrix([[(sin(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*cos(t)*x)*cos(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+sin(2-0.1*x)*cos(t)*x)*sin(3-0.1*x)*cos(t)],
            [(cos(2-0.1*x)*sin(t)*x+cos(2-0.1*x)*sin(t)*x)*sin(3-0.1
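One common way to speed this up (a minimal sketch, not the asker's final code) is to lambdify each symbolic entry once into a NumPy-backed function and hand it to scipy.integrate.dblquad. The matrix below is a stand-in for the larger, truncated one, and the integration limits are assumptions, since the excerpt does not state them:

```python
import numpy as np
import sympy
from sympy import Matrix, sin, cos
from scipy import integrate

x, t = sympy.symbols('x, t')

# Stand-in for the (larger, truncated) matrix in the question.
A = Matrix([
    [(sin(2 - 0.1*x)*sin(t)*x + cos(2 - 0.1*x)*cos(t)*x)*cos(3 - 0.1*x)*cos(t)],
    [(cos(2 - 0.1*x)*sin(t)*x + sin(2 - 0.1*x)*cos(t)*x)*sin(3 - 0.1*x)*cos(t)],
])

# Assumed integration limits (the excerpt does not give them).
x_min, x_max = 0.0, 1.0
t_min, t_max = 0.0, 2*np.pi

result = np.zeros(A.shape)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        f = sympy.lambdify((x, t), A[i, j], 'numpy')   # compile the entry once
        # dblquad treats f's first argument (x) as the inner integration variable
        val, _ = integrate.dblquad(f, t_min, t_max,
                                   lambda t_: x_min, lambda t_: x_max)
        result[i, j] = val

print(result)
```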

Monte Carlo integration using importance sampling given a proposal function

痞子三分冷 submitted on 2019-12-01 14:11:37
Given a Laplace distribution proposal, g(x) = 1/2*e^(-|x|), and sample size n = 1000, I want to conduct the Monte Carlo (MC) integration for estimating θ via importance sampling. Eventually I want to calculate the mean and standard deviation of this MC estimate in R once I get there. Edit (arrived late, after the answer below): this is what I have for my R code so far:

library(VGAM)
n = 1000
x = rexp(n, 0.5)
hx = mean(2*exp(-sqrt(x))*(sin(x))^2)
gx = rlaplace(n, location = 0, scale = 1)

Now we can write a simple R function to sample from the Laplace distribution:

## `n` is sample size
rlaplace <-
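The question is in R; purely to illustrate the importance-sampling recipe the asker is after, here is a minimal Python sketch. The integrand h below is a hypothetical placeholder, since the excerpt does not reproduce the actual θ integral:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_pdf(x):
    # Laplace(0, 1) proposal density: g(x) = 0.5 * exp(-|x|)
    return 0.5 * np.exp(-np.abs(x))

def h(x):
    # hypothetical integrand standing in for the question's theta integral
    return np.exp(-np.abs(x)) * np.sin(x)**2

n = 1000
x = rng.laplace(loc=0.0, scale=1.0, size=n)   # draw from the proposal g

# Importance-sampling estimate of theta = integral of h(x) dx
w = h(x) / g_pdf(x)
theta_hat = w.mean()
se_hat = w.std(ddof=1) / np.sqrt(n)
print(theta_hat, se_hat)
```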

Calculation of areas between two curves

两盒软妹~` submitted on 2019-12-01 11:56:16
I have a piece of code containing a curve and a line. I know how to fill the areas below and above the line, but I need to calculate the value of each area. Here is the code:

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0.0, 2, 0.01)
y1 = np.sin(2*np.pi*x)
y2 = 0*x

fig, ax = plt.subplots(1, 1, sharex=True)
ax.plot(x, y1, x, y2, color='black')
ax.fill_between(x, y1, y2, where=y2 >= y1, facecolor='green', interpolate=True)
ax.fill_between(x, y1, y2, where=y2 <= y1, facecolor='red', interpolate=True)
plt.show()

Any help?

pylang: Adapted from the scipy.integrate.quad docs example for a
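For the specific curves in the question (a sine and the zero line), one straightforward approach is to integrate the positive and negative parts of y1 - y2 separately with the trapezoidal rule. This is a minimal sketch, not the answer the excerpt cuts off:

```python
import numpy as np
from scipy import integrate

x = np.arange(0.0, 2, 0.01)
y1 = np.sin(2*np.pi*x)
y2 = 0*x

diff = y1 - y2
# Area of the regions where the curve is above the line, and where it is below.
area_above = integrate.trapezoid(np.where(diff > 0, diff, 0.0), x)
area_below = integrate.trapezoid(np.where(diff < 0, -diff, 0.0), x)
print(area_above, area_below)   # each close to 2/pi for sin(2*pi*x) on [0, 2]
```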

Using odeint function definition

北城余情 submitted on 2019-12-01 08:40:55
Pretty noob question, so please bear with me. I am following the example given here: http://www.codeproject.com/Articles/268589/odeint-v2-Solving-ordinary-differential-equations In particular, I am looking at this function:

void lorenz( state_type &x , state_type &dxdt , double t )
{
    dxdt[0] = sigma * ( x[1] - x[0] );
    dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
    dxdt[2] = x[0]*x[1] - b * x[2];
}

In my case, R takes on a series of values (a vector of 100 doubles). odeint is called as:

integrate_const( runge_kutta4< state_type >() , lorenz , x , 0.0 , 10.0 , dt );

I would like to do this for each
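In Boost.odeint the usual fix is to pass a functor or lambda that captures the current R and to call the integrator once per value. Purely to illustrate that pattern (the question itself is C++), here is a Python/SciPy sketch; R_values and x0 below are made up:

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, b = 10.0, 8.0 / 3.0

def lorenz(t, x, R):
    # Lorenz right-hand side with R as an explicit parameter
    return [sigma * (x[1] - x[0]),
            R * x[0] - x[1] - x[0] * x[2],
            x[0] * x[1] - b * x[2]]

R_values = np.linspace(0.0, 99.0, 100)   # made-up stand-in for the asker's 100 R values
x0 = [10.0, 1.0, 1.0]                    # made-up initial state

final_states = []
for R in R_values:
    sol = solve_ivp(lorenz, (0.0, 10.0), x0, args=(R,))
    final_states.append(sol.y[:, -1])    # keep the state at t = 10 for each R
```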

Different intervals for Gauss-Legendre quadrature in numpy

依然范特西╮ submitted on 2019-12-01 04:23:57
How can we use the NumPy function numpy.polynomial.legendre.leggauss over intervals other than [-1, 1]? The following example compares scipy.integrate.quad to the Gauss-Legendre method over the interval [-1, 1]:

import numpy as np
from scipy import integrate

# Define function and interval
a = -1.
b = 1.
f = lambda x: np.cos(x)

# Gauss-Legendre (default interval is [-1, 1])
deg = 6
x, w = np.polynomial.legendre.leggauss(deg)
gauss = sum(w * f(x))

# For comparison
quad, quad_err = integrate.quad(f, a, b)

print 'The QUADPACK solution: {0:.12} with error: {1:.12}'.format(quad, quad_err)
print
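The standard trick, sketched below, is an affine change of variables: map the Gauss-Legendre nodes from the reference interval [-1, 1] to [a, b] and scale the weights by (b - a)/2. The interval [0, π/2] here is just an example:

```python
import numpy as np
from scipy import integrate

a, b = 0.0, np.pi / 2          # example interval other than [-1, 1]
f = np.cos

deg = 6
x, w = np.polynomial.legendre.leggauss(deg)

# Affine map from [-1, 1] to [a, b]:
#   x' = 0.5*(b - a)*x + 0.5*(b + a), and the weights scale by 0.5*(b - a)
xp = 0.5 * (b - a) * x + 0.5 * (b + a)
gauss = 0.5 * (b - a) * np.sum(w * f(xp))

quad, quad_err = integrate.quad(f, a, b)
print(gauss, quad)             # both close to 1.0 for cos on [0, pi/2]
```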

Comparison of odeint's runge_kutta4 with Matlab's ode45

泄露秘密 submitted on 2019-11-30 22:53:27
I would like to use the runge_kutta4 method from the odeint C++ library. I've already solved the problem in Matlab. My Matlab code for solving x'' = -x - g*x', with initial values x1 = 1, x2 = 0, is as follows:

main.m

clear all
clc
t = 0:0.1:10;
x0 = [1; 0];
[t, x] = ode45('ODESolver', t, x0);
plot(t, x(:,1));
title('Position');
xlabel('time (sec)');
ylabel('x(t)');

ODESolver.m

function dx = ODESolver(t, x)
dx = zeros(2,1);
g = 0.15;
dx(1) = x(2);
dx(2) = -x(1) - g*x(2);
end

I've installed the odeint library. My code for using runge_kutta4 is as follows:

#include <iostream>
#include <boost
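For reference only (the question itself is about the C++ side), here is the same damped oscillator in Python, comparing a hand-rolled fixed-step RK4, the analogue of odeint's runge_kutta4 stepper, against SciPy's adaptive RK45, the analogue of ode45:

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 0.15

def rhs(t, x):
    # x'' = -x - g*x' written as a first-order system
    return np.array([x[1], -x[0] - g * x[1]])

def rk4_fixed(f, ts, x0):
    # Classic fixed-step RK4 over the given time grid.
    xs = np.empty((len(ts), len(x0)))
    xs[0] = x0
    for i in range(len(ts) - 1):
        t, x = ts[i], xs[i]
        dt = ts[i + 1] - t
        k1 = f(t, x)
        k2 = f(t + dt/2, x + dt/2 * k1)
        k3 = f(t + dt/2, x + dt/2 * k2)
        k4 = f(t + dt, x + dt * k3)
        xs[i + 1] = x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return xs

ts = np.linspace(0.0, 10.0, 101)          # same grid as the Matlab code: 0:0.1:10
xs = rk4_fixed(rhs, ts, np.array([1.0, 0.0]))
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method='RK45',
                t_eval=ts, rtol=1e-8, atol=1e-10)
print(np.max(np.abs(xs[:, 0] - sol.y[0])))  # max difference in position x(t)
```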

Stop integration after designated length of time in Matlab

孤者浪人 submitted on 2019-11-29 17:43:14
I want to stop solving a differential equation in Matlab if it takes more than a designated amount of time. I tried the following, but it doesn't work:

options = odeset('AbsTol',1e-8,'RelTol',1e-5);
RUNTIME = 5;
timerID = tic;
while (toc(timerID) < RUNTIME)
    [t_pae,x_th_pae] = ode15s(@prosomoiwsh,[0 t_end],[80*pi/180;0;130*pi/180;0;th_initial(1);th_initial(2);th_initial(3);th_initial(4)],options);
end

How can I solve this?

UPDATE: I tried what horchler suggested and now my code looks like this:

interupt_time = 20;
outputFun = @(t,y,flag)interuptFun(t,y,flag,interupt_time);
options = odeset(
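The while/toc loop above only checks the clock between complete ode15s calls, so it cannot interrupt a single long integration. One common pattern, illustrated here in Python/SciPy with a made-up right-hand side, is to check the wall clock inside the derivative function and abort by raising an exception (in Matlab the equivalent check would live in an output function, as in the update above):

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

class WallClockExceeded(Exception):
    """Raised from the RHS once the wall-clock budget is spent."""

def make_rhs(budget_seconds):
    start = time.monotonic()
    def rhs(t, y):
        if time.monotonic() - start > budget_seconds:
            raise WallClockExceeded(f"exceeded {budget_seconds} s at t = {t:.3g}")
        # made-up stand-in for the question's prosomoiwsh right-hand side
        return [-1000.0 * y[0] + np.sin(t), y[0] - y[1]]
    return rhs

try:
    sol = solve_ivp(make_rhs(5.0), (0.0, 1e6), [1.0, 0.0],
                    method='LSODA', rtol=1e-5, atol=1e-8)
    print("finished, final t =", sol.t[-1])
except WallClockExceeded as exc:
    print("stopped early:", exc)
```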

Change a constant in ODE calculations under particular conditions with a flag

泄露秘密 submitted on 2019-11-29 17:00:40
I have an ODE for calculating how acidity changes. Everything is working just fine, except that I would like to change a constant whenever the acidity reaches a critical point; it is supposed to simulate some kind of irreversible effect. My constants come from a structure file (c) that I load once in the ODE function.

[Time,Results] = ode15s(@(x, c) f1(x, c),[0 c.length],x0,options);

The main problem here is not telling Matlab to change the constant, but remembering whether the change has already happened once during the simulation, so that Matlab takes the irreversibly changed constant rather than
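One common way to get an irreversible switch is to stop the solver with a terminal event at the critical acidity and restart it with the modified constant, so the "flag" is simply which leg of the integration you are on. Below is a minimal Python/SciPy sketch of that two-leg pattern; the model, THRESHOLD, K_BEFORE and K_AFTER are all made up for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

THRESHOLD = 4.0                 # hypothetical critical acidity
K_BEFORE, K_AFTER = 0.5, 0.1    # hypothetical constant before/after the irreversible change

def rhs(t, y, k):
    # toy model: acidity relaxes toward 3.0 at rate k
    return [-k * (y[0] - 3.0)]

def hit_threshold(t, y, k):
    return y[0] - THRESHOLD
hit_threshold.terminal = True    # stop the first leg when the threshold is crossed
hit_threshold.direction = -1     # only trigger while the acidity is falling

y0, t_end = [7.0], 50.0

# Leg 1: original constant until the event fires (or until t_end).
leg1 = solve_ivp(rhs, (0.0, t_end), y0, args=(K_BEFORE,), events=hit_threshold)

if leg1.t_events[0].size:        # the irreversible change happened: restart with the new constant
    t_switch = leg1.t_events[0][0]
    leg2 = solve_ivp(rhs, (t_switch, t_end), leg1.y[:, -1], args=(K_AFTER,))
    t_all = np.concatenate([leg1.t, leg2.t])
    y_all = np.concatenate([leg1.y[0], leg2.y[0]])
else:
    t_all, y_all = leg1.t, leg1.y[0]
```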