scipy

Curve fit an exponential decay function in Python using given data points

℡╲_俬逩灬. Submitted on 2021-02-07 19:19:26
Question: With the curve_fit function in SciPy I'm able to determine the coefficients that represent the curve shown in the plot below.

    def func2(t, tau):
        return np.exp(-t / tau)

    t2 = np.linspace(0, 4, 50)
    y2 = func2(t2, 1.2)
    y2_noise = 0.2 * np.random.normal(size=t2.size)
    y2_curve_noise = y2 + y2_noise

    popt2, pcov2 = curve_fit(func2, t2, y2_curve_noise)
    tau2, = popt2
    y2_fit = func2(t2, tau2)

I would like to use a similar function to represent some data points. However, I'm unable to use this approach
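A minimal sketch of the kind of fit the question is after, assuming the measured data need a free amplitude and baseline in addition to tau (the t_data / y_data values below are placeholders, not from the question):

    import numpy as np
    from scipy.optimize import curve_fit

    # Placeholder "given data points" standing in for the asker's measurements
    t_data = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    y_data = np.array([2.9, 2.1, 1.5, 1.1, 0.8, 0.45, 0.25])

    def decay(t, amplitude, tau, offset):
        # Exponential decay with a free amplitude and baseline offset
        return amplitude * np.exp(-t / tau) + offset

    # For real data that does not start at 1 and decay to 0, an initial
    # guess (p0) matters far more than it does for the synthetic example.
    p0 = (y_data[0], 1.0, 0.0)
    popt, pcov = curve_fit(decay, t_data, y_data, p0=p0)
    amplitude, tau, offset = popt
    print(f"tau = {tau:.3f}, amplitude = {amplitude:.3f}, offset = {offset:.3f}")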

How to efficiently extract values from nested numpy arrays generated by loadmat function?

为君一笑 Submitted on 2021-02-07 19:13:49
Question: Is there a more efficient method in Python to extract data from a nested array such as A = array([[array([[12000000]])]], dtype=object)? I have been using A[0][0][0][0], but it does not seem to be an efficient method when you have lots of data like A. I have also used numpy.squeeze(array([[array([[12000000]])]], dtype=object)), but this gives me array(array([[12000000]]), dtype=object). PS: The nested array was generated by the loadmat() function in the scipy module to load a .mat file which
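One way to unwrap this kind of nesting, shown as a sketch: .item() pulls the single element out of a size-1 array, and loadmat's squeeze_me option can drop the singleton MATLAB dimensions at load time (the .mat filename below is hypothetical):

    import numpy as np
    from scipy.io import loadmat

    # Rebuild the nested structure from the question explicitly
    inner = np.array([[12000000]])
    A = np.empty((1, 1), dtype=object)
    A[0, 0] = inner

    # .item() returns the single element of a size-1 array, so two calls
    # unwrap both levels of nesting without chained indexing.
    value = A.item().item()
    print(value)  # 12000000

    # When loading the .mat file itself, squeeze_me=True asks loadmat to
    # strip the singleton dimensions up front (filename is made up here):
    # data = loadmat("measurements.mat", squeeze_me=True)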

Improving frequency-time normalization/Hilbert transform runtimes

Deadly Submitted on 2021-02-07 18:42:38
Question: So this is a bit of a nitty-gritty question... I have a time-series signal with a non-uniform response spectrum that I need to whiten. I do this whitening using a frequency-time normalization method, where I incrementally filter my signal between two frequency endpoints, using a constant narrow frequency band (~1/4 of the lowest-frequency end-member). I then find the envelope that characterizes each one of these narrow bands and normalize that frequency component. I then rebuild my signal
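A rough sketch of the band-by-band envelope normalization described above, using scipy.signal; the filter order, band step, and test signal are assumptions, not the asker's actual parameters:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def whiten(signal, fs, f_min, f_max, bandwidth):
        # Filter the signal in narrow overlapping bands, divide each band by
        # its Hilbert envelope, and sum the normalized bands back together.
        whitened = np.zeros_like(signal, dtype=float)
        f_lo = f_min
        while f_lo + bandwidth <= f_max:
            sos = butter(4, [f_lo, f_lo + bandwidth], btype="bandpass",
                         fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))
            whitened += band / (envelope + 1e-12)  # guard against division by zero
            f_lo += bandwidth / 2                  # half-band overlap
        return whitened

    # Illustrative use: whiten a noisy two-tone signal
    fs = 200.0
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
    x += 0.1 * np.random.randn(t.size)
    x_whitened = whiten(x, fs, f_min=2.0, f_max=20.0, bandwidth=0.5)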

Matplotlib: align origin of right axis with specific left axis value

时间秒杀一切 Submitted on 2021-02-07 18:19:18
Question: When plotting several y axes in Matplotlib, is there a way to specify how to align the origin (and/or some ytick labels) of the right axis with a specific value of the left axis? Here is my problem: I would like to plot two sets of data as well as their difference (basically, I am trying to reproduce this kind of graph). I can reproduce it, but I have to manually adjust the ylim of the right axis so that the origin is aligned with the value I want from the left axis. I put below an example
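One way to do this alignment is to rescale the twin-axis limits so a chosen right-axis value lands at the same height as a chosen left-axis value; the helper function and the sample data below are made up for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    def align_right_axis(ax_left, ax_right, left_value, right_value=0.0):
        # Shift the right axis limits so that right_value sits at the same
        # fractional height as left_value on the left axis, keeping the span.
        l_lo, l_hi = ax_left.get_ylim()
        r_lo, r_hi = ax_right.get_ylim()
        frac = (left_value - l_lo) / (l_hi - l_lo)
        span = r_hi - r_lo
        new_lo = right_value - frac * span
        ax_right.set_ylim(new_lo, new_lo + span)

    x = np.linspace(0, 10, 100)
    a, b = np.sin(x) + 2.0, np.cos(x) + 2.1
    fig, ax1 = plt.subplots()
    ax1.plot(x, a, label="a")
    ax1.plot(x, b, label="b")
    ax2 = ax1.twinx()
    ax2.plot(x, a - b, color="gray", label="a - b")
    align_right_axis(ax1, ax2, left_value=2.0)  # 0 on the right meets 2.0 on the left
    plt.show()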

Get only “valid” points in 2D interpolation of a point cloud using Scipy/Numpy

大憨熊 Submitted on 2021-02-07 13:34:25
Question: I have a point cloud obtained from photogrammetry of a person's back. I'm trying to interpolate it onto a regular grid, and for that I'm using scipy.interpolate with good results so far. The problem is: the function I'm using (scipy.interpolate.griddata) uses the convex hull of the point cloud in the x,y plane, thus producing some values that don't exist on the original surface, which has a concave perimeter. The following illustration shows the original point cloud at the left
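A sketch of one common workaround: interpolate over the convex hull as usual, then mask out grid nodes that lie farther from any original point than a chosen radius, so the values filled in across the concave gap are dropped. The synthetic point cloud and the 0.03 radius below are assumptions for illustration:

    import numpy as np
    from scipy.interpolate import griddata
    from scipy.spatial import cKDTree

    # Synthetic stand-in for the photogrammetry point cloud (x, y) with values z;
    # a circular notch is removed so the outline is concave.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, size=(2000, 2))
    pts = pts[(pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 1.0) ** 2 > 0.3 ** 2]
    z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])

    # Interpolate onto a regular grid covering the bounding box
    gx, gy = np.mgrid[0:1:100j, 0:1:100j]
    gz = griddata(pts, z, (gx, gy), method="linear")

    # Mask grid nodes that are farther than ~0.03 from any original point
    tree = cKDTree(pts)
    dist, _ = tree.query(np.column_stack([gx.ravel(), gy.ravel()]))
    gz_masked = np.where(dist.reshape(gx.shape) <= 0.03, gz, np.nan)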

Scipy - find a basis of the column space of a matrix

杀马特。学长 韩版系。学妹 Submitted on 2021-02-07 13:23:42
Question: I'm trying to code up a simple simplex algorithm, the first step of which is to find a basic feasible solution:

    1. Choose a set B of linearly independent columns of A.
    2. Set all components of x corresponding to the columns not in B to zero.
    3. Solve the m resulting equations to determine the components of x. These are the basic variables.

I know the solution will involve using scipy.linalg.svd (or scipy.linalg.lu) and some numpy.argwhere / numpy.where magic, but I'm not sure exactly how. Does anyone
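One way to pick the independent columns is a pivoted QR decomposition rather than the SVD: scipy.linalg.qr with pivoting=True reports a column ordering from which a basis can be read off. This is a sketch under the assumption that A has full row rank (the toy A and b below are made up, and no feasibility check x >= 0 is done):

    import numpy as np
    from scipy.linalg import qr, solve

    def basic_solution(A, b):
        # Pivoted QR orders the columns of A by linear independence; the first
        # rank-many pivot columns form a basis of the column space.
        m, n = A.shape
        _, R, piv = qr(A, pivoting=True)
        tol = np.abs(R[0, 0]) * max(m, n) * np.finfo(float).eps
        rank = int(np.sum(np.abs(np.diag(R)) > tol))
        basis_cols = piv[:rank]

        # Non-basic variables are zero; solve the square system for the rest
        # (assumes rank == m, i.e. A has full row rank).
        x = np.zeros(n)
        x[basis_cols] = solve(A[:, basis_cols], b)
        return x, basis_cols

    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0, 2.0]])
    b = np.array([3.0, 2.0])
    x, cols = basic_solution(A, b)
    print(cols, x)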

Scipy: Pearson's correlation always returning 1

爱⌒轻易说出口 Submitted on 2021-02-07 11:52:26
Question: I am using the Python library SciPy to calculate Pearson's correlation for two float arrays. The returned coefficient is always 1.0, even if the arrays are different. For example: [-0.65499887 2.34644428] [-1.46049758 3.86537321] I am calling the routine in this way: r_row, p_value = scipy.stats.pearsonr(array1, array2) The value of r_row is always 1.0. What am I doing wrong? Answer 1: Pearson's correlation coefficient is a measure of how well your data would be fitted by a linear regression
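The answer's point can be demonstrated directly: with only two samples a straight line always fits perfectly, so |r| is 1 no matter what the values are, and the coefficient only becomes informative with more points (the longer random example below is an illustration, not from the question):

    import numpy as np
    from scipy import stats

    # Two points: a line always fits exactly, so r is +/-1 regardless of values
    a = np.array([-0.65499887, 2.34644428])
    b = np.array([-1.46049758, 3.86537321])
    r, p = stats.pearsonr(a, b)
    print(r)  # 1.0

    # With more than two points the coefficient carries real information
    rng = np.random.default_rng(1)
    x = rng.normal(size=100)
    y = 0.5 * x + rng.normal(scale=0.5, size=100)
    r, p = stats.pearsonr(x, y)
    print(r, p)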