I am trying to calculate the first and second order moments for a portfolio of stocks (i.e. expected return and standard deviation).
In NumPy, a transpose .T reverses the order of dimensions, which means that it doesn't do anything to your one-dimensional array weights.
This is a common source of confusion for people coming from Matlab, in which one-dimensional arrays do not exist. See Transposing a NumPy Array for some earlier discussion of this.
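A quick check makes this concrete (the weights values here are made up purely for illustration, not taken from the question):

import numpy as np

weights = np.array([0.3, 0.2, 0.1, 0.25, 0.15])  # hypothetical portfolio weights

print(weights.shape)     # (5,)  -- one-dimensional
print(weights.T.shape)   # (5,)  -- .T is a no-op on a 1D array
print(np.array_equal(weights, weights.T))   # True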
np.dot(x, y) has complicated behavior on higher-dimensional arrays, but its behavior when it's fed two one-dimensional arrays is very simple: it takes the inner product. If we wanted to get the equivalent result as a matrix product of a row and column instead, we'd have to write something like
np.asscalar(x @ y[:, np.newaxis])
adding a trailing dimension to y to turn it into a "column", multiplying, and then converting our one-element array back into a scalar. But np.dot(x, y) is much faster and more efficient, so we just use that.
Edit: actually, this was dumb on my part. You can, of course, just write matrix multiplication x @ y to get equivalent behavior to np.dot for one-dimensional arrays, as tel's excellent answer points out.
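As a minimal sketch of all three spellings giving the same scalar (the values of x and y are arbitrary; note that np.asscalar is deprecated in recent NumPy releases, so .item() is used here instead):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

a = np.dot(x, y)                    # inner product of two 1D arrays
b = x @ y                           # same result via the matmul operator
c = (x @ y[:, np.newaxis]).item()   # row-times-column version, collapsed back to a scalar

assert a == b == c == 32.0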
The semantics of np.dot are not great

As Dominique Paul points out, np.dot has very heterogeneous behavior depending on the shapes of the inputs. Adding to the confusion, as the OP points out in his question, given that weights is a 1D array, np.array_equal(weights, weights.T) is True (array_equal tests for equality of both value and shape).
Use np.matmul or the equivalent @ instead

If you are someone just starting out with Numpy, my advice to you would be to ditch np.dot completely. Don't use it in your code at all. Instead, use np.matmul, or the equivalent operator @. The behavior of @ is more predictable than that of np.dot, while still being convenient to use. For example, you would get the same dot product for the two 1D arrays you have in your code like so:
returns = expected_returns_annual @ weights
You can prove to yourself that this gives the same answer as np.dot with this assert:
assert expected_returns_annual @ weights == expected_returns_annual.dot(weights)
Conceptually, @ handles this case by promoting the two 1D arrays to appropriate 2D arrays (though the implementation doesn't necessarily do this). For example, if you have x with shape (N,) and y with shape (M,), doing x @ y (which requires N == M) promotes the shapes such that:
x.shape == (1, N)
y.shape == (M, 1)
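A rough sketch of that promotion, building the 2D versions explicitly with newaxis (the array values are arbitrary):

import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

promoted = x[np.newaxis, :] @ y[:, np.newaxis]   # shapes (1, 3) @ (3, 1) -> (1, 1)
assert promoted.shape == (1, 1)
assert promoted[0, 0] == x @ y                   # x @ y returns the bare scalar 32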
What the docs say about matmul/@

Here's what the docs have to say about matmul/@ and the shapes of inputs/outputs (a short sketch of these rules in action follows the list):
- If both arguments are 2-D they are multiplied like conventional matrices.
- If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
- If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
- If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
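A small sketch of those rules in action (the shapes below are chosen arbitrarily):

import numpy as np

stack = np.random.rand(4, 2, 3)   # a "stack" of four 2x3 matrices
mat = np.random.rand(3, 5)
vec = np.random.rand(3)

print((stack @ mat).shape)   # (4, 2, 5): mat is broadcast against each matrix in the stack
print((stack @ vec).shape)   # (4, 2): vec is promoted to (3, 1), then the appended 1 is removed
print((vec @ mat).shape)     # (5,): vec is promoted to (1, 3), then the prepended 1 is removed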
Why prefer @ over dot

As hpaulj points out in the comments, np.array_equal(x.dot(y), x @ y) is True for all x and y that are 1D or 2D arrays. So why do I (and why should you) prefer @? I think the best argument for using @ is that it helps to improve your code in small but significant ways:
- @ is explicitly a matrix multiplication operator. x @ y will raise an error if y is a scalar, whereas dot will make the assumption that you actually just wanted elementwise multiplication. This can potentially result in a hard-to-localize bug in which dot silently returns a garbage result (I've personally run into that one). Thus, @ allows you to be explicit about your own intent for the behavior of a line of code (see the short sketch after this list).
- Because @ is an operator, it has some nice short syntax for coercing various sequence types into arrays, without having to explicitly cast them. For example, [0,1,2] @ np.arange(3) is valid syntax. While [0,1,2].dot(arr) is obviously not valid, np.dot([0,1,2], arr) is valid (though more verbose than using @).
- When you do need to extend your code to deal with many matrix multiplications instead of just one, the ND cases for @ are a conceptually straightforward generalization/vectorization of the lower-D cases.
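As a quick illustration of the first two points (the array values are arbitrary, and the exact exception type raised by @ for a scalar operand may vary between NumPy versions):

import numpy as np

x = np.array([1, 2, 3])

print(x.dot(2))       # array([2, 4, 6]): dot silently falls back to elementwise multiplication
try:
    x @ 2             # matmul refuses scalar operands
except (TypeError, ValueError) as e:
    print("error:", e)

print([0, 1, 2] @ np.arange(3))   # 5: the plain list is coerced to an array automatically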
The statement in the question that

The shape of weights.T should be (,5) and not (5,),

suggests some confusion over the shape attribute. shape is an ordinary Python tuple, i.e. just a set of numbers, one for each dimension of the array. That's analogous to the size of a MATLAB matrix.
(5,) is just the way of displaying a one-element tuple. The trailing , is required because plain parentheses () are already used for simple grouping in Python.
In [22]: tuple([5])
Out[22]: (5,)
Thus the , in (5,) does not have a special numpy meaning, and:
In [23]: (,5)
File "<ipython-input-23-08574acbf5a7>", line 1
(,5)
^
SyntaxError: invalid syntax
A key difference between numpy and MATLAB is that numpy arrays can have any number of dimensions (up to 32), whereas MATLAB has a lower bound of 2. The result is that a 5-element numpy array can have shapes (5,), (1,5), (5,1), (1,5,1), etc.
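For example (the values are arbitrary, only the shapes matter):

import numpy as np

a = np.arange(5)
print(a.shape)                    # (5,)
print(a.reshape(1, 5).shape)      # (1, 5)
print(a.reshape(5, 1).shape)      # (5, 1)
print(a.reshape(1, 5, 1).shape)   # (1, 5, 1)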
The handling of a 1d weights array in your example is best explained by the np.dot documentation. Describing it as an inner product seems clear enough to me. But I'm also happy with the "sum product over the last axis of a and the second-to-last axis of b" description, adjusted for the case where b has only one axis.
(5,) with (5,n) => (n,) # 5 is the common dimension
(n,5) with (5,) => (n,)
(n,5) with (5,1) => (n,1)
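A rough check of those shape combinations (arrays filled with ones just to have something to multiply; n is picked arbitrarily):

import numpy as np

n = 3
v = np.ones(5)          # shape (5,)
A = np.ones((n, 5))     # shape (n, 5)
B = np.ones((5, n))     # shape (5, n)
c = np.ones((5, 1))     # shape (5, 1)

print(np.dot(v, B).shape)   # (n,)
print(np.dot(A, v).shape)   # (n,)
print(np.dot(A, c).shape)   # (n, 1)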
In:

(x1,...,xn' * (R1,...,Rn)

are you missing a )?

(x1,...,xn)' * (R1,...,Rn)

And does the * mean matrix product, not elementwise product (.* in MATLAB)? (R1,...,Rn) would have size (n,1), (x1,...,xn)' would have size (1,n), and the product would be (1,1).
By the way, that raises another difference. MATLAB expands dimensions to the right, e.g. (n,1,1,...). numpy expands them to the left, e.g. (1,1,n), if needed by broadcasting. The initial dimensions are the outermost ones. That's not as critical a difference as the 2-dimension lower bound, but it shouldn't be ignored.
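A small sketch of that left-side expansion during broadcasting (shapes picked arbitrarily):

import numpy as np

a = np.ones((2, 3, 5))
b = np.ones(5)           # shape (5,)
# During broadcasting b is treated as shape (1, 1, 5): new dimensions are added on the left.
print((a + b).shape)     # (2, 3, 5)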
I had the same question some time ago. It seems that when one of your arrays is one-dimensional, numpy figures out automatically what you are trying to do. The documentation for the dot function has a more specific explanation of the logic applied:
If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation).
If both a and b are 2-D arrays, it is matrix multiplication, but using matmul or a @ b is preferred.
If either a or b is 0-D (scalar), it is equivalent to multiply and using numpy.multiply(a, b) or a * b is preferred.
If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and the second-to-last axis of b:
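The docs illustrate that last rule with dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]). A small sketch of it, with shapes chosen arbitrarily:

import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(5, 4, 6)

out = np.dot(a, b)
print(out.shape)   # (2, 3, 5, 6): last axis of a is paired with the second-to-last axis of b

# Spot-check one entry against the formula from the docs:
assert np.isclose(out[1, 2, 3, 4], np.sum(a[1, 2, :] * b[3, :, 4]))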