I have a set of simulation data for which I would like to find the lowest slope in n dimensions. The spacing of the data is constant along each dimension, but not the same for all dimensions.
The second derivatives are given by the Hessian matrix. Here is a Python implementation for ND arrays that consists of applying np.gradient twice and storing the output appropriately:
import numpy as np

def hessian(x):
    """
    Calculate the Hessian matrix with finite differences.

    Parameters:
    - x : ndarray

    Returns:
    an array of shape (x.ndim, x.ndim) + x.shape
    where array[i, j, ...] corresponds to the second derivative x_ij
    """
    x_grad = np.gradient(x)
    hessian = np.empty((x.ndim, x.ndim) + x.shape, dtype=x.dtype)
    for k, grad_k in enumerate(x_grad):
        # iterate over dimensions
        # apply gradient again to every component of the first derivative
        tmp_grad = np.gradient(grad_k)
        for l, grad_kl in enumerate(tmp_grad):
            hessian[k, l, ...] = grad_kl
    return hessian

x = np.random.randn(100, 100, 100)
hessian(x)
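Since your spacing is constant along each axis but differs between axes, note that np.gradient accepts one spacing value per axis. A minimal sketch of the same function with per-axis spacings passed through (the hessian_with_spacing name and the spacing argument are my additions, not part of the original answer):

def hessian_with_spacing(x, *spacing):
    """Same as hessian(), but forwards one grid spacing per axis to np.gradient."""
    x_grad = np.gradient(x, *spacing)
    hess = np.empty((x.ndim, x.ndim) + x.shape, dtype=x.dtype)
    for k, grad_k in enumerate(x_grad):
        # differentiate the k-th first derivative along every axis,
        # again using the per-axis spacings
        tmp_grad = np.gradient(grad_k, *spacing)
        for l, grad_kl in enumerate(tmp_grad):
            hess[k, l, ...] = grad_kl
    return hess

# e.g. grid steps of 0.1, 0.2 and 0.5 along the three axes
H = hessian_with_spacing(x, 0.1, 0.2, 0.5)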
Note that if you are only interested in the magnitude of the second derivatives, you could use the Laplace operator implemented by scipy.ndimage.laplace (formerly scipy.ndimage.filters.laplace), which is the trace (sum of diagonal elements) of the Hessian matrix.
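For example, a short sketch of that approach, which yields one scalar per grid point rather than the full Hessian:

from scipy import ndimage

lap = ndimage.laplace(x)   # trace of the Hessian at every grid point, same shape as x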
Taking the smallest element of the Hessian matrix could be used to estimate the lowest slope in any spatial direction.
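As a rough sketch of that idea (using the hessian() function above; plain NumPy otherwise):

H = hessian(x)                               # shape (ndim, ndim) + x.shape
min_val = H.min()                            # smallest second derivative overall
idx = np.unravel_index(H.argmin(), H.shape)  # where it occurs
i, j = idx[:2]                               # which pair of axes
point = idx[2:]                              # at which grid point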