I really cannot understand what the numpy.gradient function does, or how to use it to compute the gradient of a multivariable function.
For example, you could use scipy.optimize.approx_fprime to get a numerical estimate of the gradient:
import numpy as np
from scipy.optimize import approx_fprime

f = lambda x: x**2
approx_fprime(np.array([2]), f, epsilon=1e-6)  # array([ 4.000001])
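Since the question is about multivariable functions, note that the same call takes a vector input as long as the function returns a scalar; a quick sketch (the function and point are just illustrative):

f2 = lambda x: x[0]**2 + x[1]**3
approx_fprime(np.array([1.0, 2.0]), f2, epsilon=1e-6)  # ≈ array([ 2., 12.])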
Theano can also compute the gradient automatically:
http://deeplearning.net/software/theano/tutorial/gradients.html
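For reference, a minimal sketch along the lines of that tutorial (assuming Theano is installed; the function here is just an example):

import theano
import theano.tensor as T

x = T.dvector('x')
y = T.sum(x ** 2)             # scalar cost
gy = T.grad(y, x)             # symbolic gradient of y w.r.t. x
grad_f = theano.function([x], gy)
grad_f([1.0, 2.0])            # array([ 2., 4.])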
NumPy and SciPy are for numerical calculations. Since you want to calculate the gradient of an analytical function, you should use the SymPy package, which supports symbolic mathematics. Differentiation is explained here (you can actually try it in the web console in the bottom-left corner).
You can install SymPy under Ubuntu with

sudo apt-get install python-sympy

or under any Linux distribution with pip:

sudo pip install sympy
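Once installed, computing a gradient symbolically looks like this (a small sketch; the function is just an example):

import sympy as sp

x, y = sp.symbols('x y')
f = 2*x**2 + 3*y**2                       # example function
grad = [sp.diff(f, v) for v in (x, y)]    # [4*x, 6*y]
[g.subs({x: 1, y: 2}) for g in grad]      # evaluate at (1, 2) -> [4, 12]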
The problem is that numpy can't give you the derivatives of an analytical function directly, so you have two options:
With NUMPY
What you essentially have to do is define a grid in three dimensions and evaluate the function on this grid. Afterwards you feed this table of function values to numpy.gradient
to get an array with the numerical derivative for every dimension (variable).
Example from here:
import numpy as np

# sample the function on a 3-D grid with step size 25
x, y, z = np.mgrid[-100:101:25., -100:101:25., -100:101:25.]
V = 2*x**2 + 3*y**2 - 4*z  # just a random function for the potential

# pass the grid spacing so the derivatives are scaled correctly
Ex, Ey, Ez = np.gradient(V, 25.)
Without NUMPY
You could also calculate the derivative yourself by using the centered difference quotient.
This is essentially what numpy.gradient
does for every point of your predefined grid.
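A minimal sketch of that idea (the step size h and the test function are illustrative):

import numpy as np

def numerical_gradient(f, x, h=1e-5):
    # centered difference quotient per coordinate:
    # df/dx_i ≈ (f(x + h*e_i) - f(x - h*e_i)) / (2h)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

numerical_gradient(lambda p: 2*p[0]**2 + 3*p[1]**2, np.array([1.0, 2.0]))
# ≈ array([ 4., 12.])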
Numpy doesn't directly support gradient calculations without creating an entire grid of points. Instead, I would use automatic differentiation. See https://code.activestate.com/recipes/580610-auto-differentiation/ for how to do it in Python.
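For illustration, a minimal sketch of the idea behind that recipe, using forward-mode dual numbers (this is not the recipe's code, just the underlying technique):

class Dual:
    """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (a + b*eps)*(c + d*eps) = a*c + (a*d + b*c)*eps
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the derivative slot with 1 and read it off the result
    return f(Dual(x, 1.0)).der

derivative(lambda x: 3*x*x + 2*x, 2.0)  # 14.0, exact (6*x + 2 at x = 2)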