I have a set of simulation data where I would like to find the lowest slope in n dimensions. The spacing of the data is constant along each dimension, but not all the same (I co
Slopes, Hessians and Laplacians are related, but are 3 different things.
Start with 2d: for a function f(x, y) of 2 variables, e.g. a height map of a range of hills, slopes aka gradients are direction vectors: a direction and length at each point (x, y). This can be given by 2 numbers dx dy in Cartesian coordinates, or by an angle θ and a length sqrt(dx^2 + dy^2) in polar coordinates. Over a whole range of hills, we get a vector field.
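A minimal sketch of such a vector field in numpy, assuming the height map is sampled on a regular grid; the spacings dx, dy and the toy surface here are made up for illustration:

    import numpy as np

    dx, dy = 0.5, 0.25                       # grid spacing along x and y (not equal)
    x = np.arange(0, 10, dx)
    y = np.arange(0, 5, dy)
    X, Y = np.meshgrid(x, y, indexing="ij")  # X varies along axis 0, Y along axis 1
    z = np.sin(X) * np.cos(Y)                # a toy range of hills

    dzdx, dzdy = np.gradient(z, dx, dy)      # the slope components at every grid point
    angle = np.arctan2(dzdy, dzdx)           # polar form: direction θ ...
    length = np.hypot(dzdx, dzdy)            # ... and length sqrt(dx^2 + dy^2)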
Hessians describe curvature near (x, y), e.g. a paraboloid or a saddle, with 4 numbers: dxx dxy dyx dyy. A Laplacian is 1 number, dxx + dyy, at each point (x, y). Over a range of hills, we get a scalar field. (Functions or hills with Laplacian = 0 are particularly smooth.)
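A rough sketch of estimating the Hessian entries and the Laplacian on the same kind of grid, by applying np.gradient twice (the surface and spacings are again illustrative, not from the question):

    import numpy as np

    dx, dy = 0.5, 0.25
    X, Y = np.meshgrid(np.arange(0, 10, dx), np.arange(0, 5, dy), indexing="ij")
    z = np.sin(X) * np.cos(Y)

    dzdx, dzdy = np.gradient(z, dx, dy)      # first derivatives
    dxx, dxy = np.gradient(dzdx, dx, dy)     # second derivatives dxx, dxy
    dyx, dyy = np.gradient(dzdy, dx, dy)     # dyx ~ dxy for smooth data

    laplacian = dxx + dyy                    # 1 number at each point: a scalar field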
Slopes are linear fits and Hessians quadratic fits, for tiny steps h near a point xy:

    f(xy + h) ~ f(xy)
              + slope . h     -- dot product, linear in both slope and h
              + h' H h / 2    -- quadratic in h

Here xy, slope and h are vectors of 2 numbers, and H is a matrix of 4 numbers dxx dxy dyx dyy.
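A quick numeric check of that expansion, using a made-up smooth test function with a known gradient and Hessian (none of these names are from the question):

    import numpy as np

    def f(p):
        x, y = p
        return np.sin(x) * np.cos(y)

    def grad(p):                             # slope: 2 numbers
        x, y = p
        return np.array([np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)])

    def hess(p):                             # Hessian: 4 numbers dxx dxy dyx dyy
        x, y = p
        return np.array([[-np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)],
                         [-np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y)]])

    xy = np.array([0.5, 1.0])
    h = np.array([1e-2, -2e-2])              # a tiny step

    taylor = f(xy) + grad(xy) @ h + h @ hess(xy) @ h / 2
    print(f(xy + h), taylor)                 # agree to about O(|h|^3)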
N-d is similar: slopes are direction vectors of N numbers, Hessians are matrices of N^2 numbers, and Laplacians are 1 number, at each point.
(You might find better answers over on math.stackexchange.)