exp

plotting of graph with unknown parameter containing exp

柔情痞子 submitted on 2020-01-06 20:08:18

Question: The equation is:

    Ii = 7.5 - 1.1e-06*exp((Vv + 0.3*Ii)/2) + 1.1e-06 - (Vv + 0.3*Ii)/271

How can I plot a graph of Ii vs Vv, given Vv with a step size of Vv = 0:1.5:35? Would really appreciate any help, thanks.

Answer 1: Since Ii appears on both sides, solve the equation for Ii at each value of Vv. You can use the solve method:

    Vv_arr = 0:1.5:35;
    res_arr = [];
    syms Ii
    for Vv = Vv_arr
        sol = solve(7.5 - 1.1e-06*exp((Vv + 0.3*Ii)/2) + 1.1e-06 - (Vv + 0.3*Ii)/271 - Ii == 0);
        res = double(sol);  % convert the symbolic solution to a numeric value
        res_arr = [res_arr res];
    end
    plot(Vv_arr, res_arr, 'LineWidth', 2);
    grid on;
    xlabel('Vv');
    ylabel('Ii');
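The same implicit equation can also be solved without the Symbolic Math Toolbox. Below is a minimal Python sketch (not from the original answer) that finds the root by plain bisection; the bracket [-50, 10] is an assumption that happens to contain the root for every Vv in 0:1.5:35, because the residual is strictly decreasing in Ii.

```python
import math

def f(Ii, Vv):
    # residual of: Ii = 7.5 - 1.1e-6*exp((Vv + 0.3*Ii)/2) + 1.1e-6 - (Vv + 0.3*Ii)/271
    return (7.5 - 1.1e-6 * math.exp((Vv + 0.3 * Ii) / 2) + 1.1e-6
            - (Vv + 0.3 * Ii) / 271 - Ii)

def solve_Ii(Vv, lo=-50.0, hi=10.0, iters=200):
    # f is strictly decreasing in Ii, so f(lo) > 0 > f(hi); bisect the bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, Vv) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Vv_arr = [1.5 * k for k in range(24)]      # MATLAB's 0:1.5:35 -> 0, 1.5, ..., 34.5
Ii_arr = [solve_Ii(Vv) for Vv in Vv_arr]   # one root per Vv, ready to plot
```

The resulting (Vv_arr, Ii_arr) pairs can then be plotted with any plotting library.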

Perform numpy exp function in-place

荒凉一梦 submitted on 2020-01-02 05:36:09

Question: As in the title, I need to apply numpy.exp to a very large ndarray, let's say ar, and store the result in ar itself. Can this operation be performed in-place?

Answer 1: You can use the optional out argument of exp:

    import numpy as np

    a = np.array([3.4, 5])
    res = np.exp(a, a)
    print(res is a)
    print(a)

Output:

    True
    [ 29.96410005 148.4131591 ]

From the documentation:

    exp(x[, out])
    Calculate the exponential of all elements in the input array.
    Returns out : ndarray -- output array, element-wise exponential of x.

Here all elements of a are overwritten with their exponentials; no new array is allocated.
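A minimal self-contained demonstration of the same idea, written with the keyword form out= for clarity:

```python
import numpy as np

a = np.array([3.4, 5.0])
res = np.exp(a, out=a)   # results are written into a's own buffer

print(res is a)          # the returned array is a itself
print(a)
```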

sympy hangs when trying to solve a simple algebraic equation

二次信任 submitted on 2019-12-31 02:14:15

Question: I recently reinstalled my Python environment, and code that used to run very quickly now creeps along at best (usually it just hangs, taking up more and more memory). The point at which the code hangs is:

    solve(exp(-alpha * x**2) - 0.01, alpha)

I've been able to reproduce this problem with a fresh IPython 0.13.1 session:

    In [1]: from sympy import solve, Symbol, exp
    In [2]: x = 14.7296138519
    In [3]: alpha = Symbol('alpha', real=True)
    In [4]: solve(exp(-alpha * x**2) - 0.01, alpha)

This works for …
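As a workaround while the symbolic solver hangs, this particular equation can be inverted by hand: exp(-alpha*x**2) = 0.01 gives alpha = ln(100)/x**2. A quick numeric sketch in plain Python, bypassing sympy entirely:

```python
import math

x = 14.7296138519
# exp(-alpha * x**2) = 0.01  =>  -alpha * x**2 = ln(0.01)  =>  alpha = ln(100) / x**2
alpha = math.log(100) / x**2
print(alpha)
```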

c++ exp function different results under x64 on i7-3770 and i7-4790

我怕爱的太早我们不能终老 submitted on 2019-12-24 08:16:04

Question: When I execute a simple x64 application with the following code, I get different results on Windows PCs with i7-3770 and i7-4790 CPUs.

    #include <cmath>
    #include <iostream>
    #include <limits>

    int main() {
        double val = exp(-10.240990982718174);
        std::cout.precision(std::numeric_limits<double>::max_digits10);
        std::cout << val;
    }

Result on i7-3770: 3.5677476354876406e-05
Result on i7-4790: 3.5677476354876413e-05

When I modify the code to call

    unsigned int control_word;
    _controlfp_s(&control_word, …

How to Calculate Aggregated Product Function in SQL Server

馋奶兔 submitted on 2019-12-23 02:53:40

Question: I have a table with the following columns:

    No.  Name  Serial
    1    Tom   2
    2    Bob   5
    3    Don   3
    4    Jim   6

I want to add a column containing the running product of the Serial column, like this:

    No.  Name  Serial  Multiply
    1    Tom   2       2
    2    Bob   5       10
    3    Don   3       30
    4    Jim   6       180

How can I do that?

Answer 1: Oh, this is a pain. Most databases do not support a product aggregation function, but you can emulate one with logs and powers. So something like this might work:

    select t.*,
           (select exp(sum(log(serial)))
            from table t2
            where t2.no <= t.no
           ) as Multiply
    from table t;
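The exp(sum(log(...))) identity the answer relies on can be checked outside SQL. Here is a small Python sketch, using the question's data, that builds the running product from cumulative log-sums. Note that the trick, in SQL as here, breaks if any serial is zero or negative, since log is undefined there.

```python
import math

serials = [2, 5, 3, 6]

# running product of serials[0..i] via exp of the cumulative sum of logs
running = [math.exp(math.fsum(math.log(s) for s in serials[:i + 1]))
           for i in range(len(serials))]

multiply = [round(r) for r in running]   # snap tiny rounding error back to integers
print(multiply)
```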

Taking logs and adding versus multiplying

你离开我真会死。 submitted on 2019-12-22 10:22:20

Question: If I want to take the product of a list of floating-point numbers, what is the worst-case/average-case precision lost by adding their logs and then taking exp of the sum, as opposed to just multiplying them? Is there ever a case when this is actually more precise?

Answer 1: Absent any overflow or underflow shenanigans, if a and b are floating-point numbers, then the product a*b is computed to within a relative error of 1/2 ulp. A crude bound on the relative error after multiplying a chain of N numbers is therefore about (N-1)/2 ulp, since each of the N-1 multiplications contributes at most 1/2 ulp.
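A rough way to see the difference empirically is to compare both routes against an exact product computed in rational arithmetic (every float converts exactly to a Fraction). This is a sketch with arbitrary sample values, not a worst-case analysis:

```python
import math
from fractions import Fraction

xs = [0.1, 2.5, 3.3, 0.7, 1.9]

# exact reference product in rational arithmetic
exact = Fraction(1)
for v in xs:
    exact *= Fraction(v)
exact = float(exact)

direct = 1.0
for v in xs:
    direct *= v                 # plain chained multiplication

# sum-of-logs route; fsum keeps the log-sum itself accurate
via_logs = math.exp(math.fsum(math.log(v) for v in xs))

rel_direct = abs(direct - exact) / exact
rel_logs = abs(via_logs - exact) / exact
```

Both relative errors are tiny for a short chain; the log route typically loses a little more because any absolute error in the summed logs becomes relative error after exp.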

Using AVX instructions disables exp() optimization?

主宰稳场 submitted on 2019-12-21 04:51:40

Question: I am writing a feed-forward net in VC++ using AVX intrinsics, invoked via P/Invoke from C#. When I call a function that runs a large loop including exp(), performance is ~1000 ms for a loop size of 160M. As soon as I call any function that uses AVX intrinsics, and then subsequently use exp(), my performance drops to about ~8000 ms for the same operation. Note that the function calculating exp() is standard C, and the call that uses the AVX intrinsics can be …

exp() precision between Mac OS and Windows

廉价感情. submitted on 2019-12-20 04:07:17

Question: I have the code below, and when I run it on Windows and on Mac OS the precision of the results differs. Can anyone help?

    const double c = 1 - exp(-2.0);
    double x = (139 + 0.5) / 2282.0;
    x = (1 - exp(-2 * (1 - x))) / c;

The printed results are both 0.979645005277687, but the underlying hex representations differ:

    Win: 3FEF59407B6B6FF1
    Mac: 3FEF59407B6B6FF2

How can I get the same result?

Answer 1: Unless the math library on OS X uses the very same implementation/algorithm for calculating e^x as the one on Windows, you cannot expect the last bit to match; the C standard does not require exp() to be correctly rounded.
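The two hex dumps in the question decode to adjacent doubles, exactly one ulp apart. A short Python sketch (standard library only) reproduces that hex view of a double and measures the gap:

```python
import math
import struct

def double_to_hex(v):
    # big-endian IEEE-754 bit pattern of a double, as in the question's dumps
    return struct.pack('>d', v).hex().upper()

win = struct.unpack('>d', bytes.fromhex('3FEF59407B6B6FF1'))[0]
mac = struct.unpack('>d', bytes.fromhex('3FEF59407B6B6FF2'))[0]

print(double_to_hex(win))
print(mac - win == math.ulp(win))   # the two results differ by exactly one ulp
```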

Fast fixed point pow, log, exp and sqrt

瘦欲@ submitted on 2019-12-17 10:25:28

Question: I've got a fixed-point class (10.22) and I need a pow, a sqrt, an exp and a log function. Alas, I have no idea where to even start on this. Can anyone provide me with links to useful articles or, better yet, some code? I'm assuming that once I have an exp function, pow and sqrt become relatively easy to implement, as they just become:

    pow(x, y)  => exp(y * log(x))
    sqrt(x)    => pow(x, 0.5)

It's just those exp and log functions that I'm finding difficult.
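Before committing to a fixed-point implementation, the two identities in the question can be sanity-checked in ordinary floating point. A minimal sketch in plain Python (not fixed-point code):

```python
import math

def pow_via_exp(x, y):
    # pow(x, y) => exp(y * log(x)); valid only for x > 0
    return math.exp(y * math.log(x))

def sqrt_via_pow(x):
    # sqrt(x) => pow(x, 0.5)
    return pow_via_exp(x, 0.5)

print(pow_via_exp(2.0, 10.0))   # close to 1024
print(sqrt_via_pow(2.0))        # close to math.sqrt(2)
```

The same structure carries over to a Q10.22 class once fixed-point exp and log exist, with the caveat that the x > 0 domain restriction and the limited range of the format need explicit handling.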