It seems I am getting lost in something potentially silly. I have an n-dimensional numpy array, and I want to multiply it with a vector (1d array) along some dimension (which can vary).
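For concreteness, here is a minimal sketch of the operation I mean (the shapes and the explicit loop are just for illustration):
import numpy as np
a = np.arange(18).reshape((3, 2, 3))   # some n-dimensional array
b = np.array([1, 3])                   # 1d vector, length matches a.shape[1]
# desired result: scale a along axis 1 by the entries of b
c = np.empty_like(a)
for j in range(a.shape[1]):
    c[:, j, :] = a[:, j, :] * b[j]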
You could build an index object and put a full slice at the desired dimension:
import numpy as np
a = np.arange(18).reshape((3,2,3))
b = np.array([1,3])
ss = [None for i in range(a.ndim)]
ss[1] = slice(None)   # set the dimension along which to broadcast
print(ss)             # [None, slice(None, None, None), None]
c = a * b[tuple(ss)]  # index with a tuple; indexing with a list is deprecated in newer numpy
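The same idea generalizes to an arbitrary axis. A small sketch (the helper name vec_along_axis is mine, not a numpy function):
def vec_along_axis(b, ndim, axis):
    # full slice at `axis`, None (new axes) everywhere else, so b
    # broadcasts against an ndim-dimensional array along that axis
    idx = [None] * ndim
    idx[axis] = slice(None)
    return b[tuple(idx)]

c = a * vec_along_axis(b, a.ndim, 1)   # same result as above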
Using broadcasting and views, instead of actually copying the data N times into a new array with the appropriate shape (as existing answers do), is much more memory efficient. Here is such a method (based on @ShuxuanXU's code):
import numpy as np

def mult_along_axis(A, B, axis):
    # ensure we're working with Numpy arrays
    A = np.array(A)
    B = np.array(B)

    # shape check
    if axis >= A.ndim:
        raise np.AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError(
            "Length of 'A' along the given axis must be the same as B.size"
        )

    # np.broadcast_to puts the new axis as the last axis, so
    # we swap the given axis with the last one, to determine the
    # corresponding array shape. np.swapaxes only returns a view
    # of the supplied array, so no data is copied unnecessarily.
    shape = np.swapaxes(A, A.ndim - 1, axis).shape

    # Broadcast to an array with the shape as above. Again,
    # no data is copied, we only get a new look at the existing data.
    B_brc = np.broadcast_to(B, shape)

    # Swap back the axes. As before, this only changes our "point of view".
    B_brc = np.swapaxes(B_brc, A.ndim - 1, axis)

    return A * B_brc
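A quick usage sketch, reusing the a and b from the first answer as a sanity check:
a = np.arange(18).reshape((3, 2, 3))
b = np.array([1, 3])
c = mult_along_axis(a, b, axis=1)
print(c.shape)                                      # (3, 2, 3)
print(np.array_equal(c[:, 0, :], a[:, 0, :] * 1))   # True
print(np.array_equal(c[:, 1, :], a[:, 1, :] * 3))   # True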
Solution Code -
import numpy as np
# Given axis along which elementwise multiplication with broadcasting
# is to be performed
given_axis = 1
# Create an array which would be used to reshape 1D array, b to have
# singleton dimensions except for the given axis where we would put -1
# signifying to use the entire length of elements along that axis
dim_array = np.ones((1,a.ndim),int).ravel()
dim_array[given_axis] = -1
# Reshape b with dim_array and perform elementwise multiplication with
# broadcasting along the singleton dimensions for the final output
b_reshaped = b.reshape(dim_array)
mult_out = a*b_reshaped
Sample run for a demo of the steps -
In [149]: import numpy as np
In [150]: a = np.random.randint(0,9,(4,2,3))
In [151]: b = np.random.randint(0,9,(2,1)).ravel()
In [152]: whos
Variable     Type       Data/Info
---------------------------------
a            ndarray    4x2x3: 24 elems, type `int32`, 96 bytes
b            ndarray    2: 2 elems, type `int32`, 8 bytes
In [153]: given_axis = 1
Now, we would like to perform elementwise multiplication along given_axis = 1. Let's create dim_array:
In [154]: dim_array = np.ones((1,a.ndim),int).ravel()
...: dim_array[given_axis] = -1
...:
In [155]: dim_array
Out[155]: array([ 1, -1, 1])
Finally, reshape b and perform the elementwise multiplication:
In [156]: b_reshaped = b.reshape(dim_array)
...: mult_out = a*b_reshaped
...:
Check out the whos info again and pay special attention to b_reshaped and mult_out:
In [157]: whos
Variable      Type       Data/Info
----------------------------------
a             ndarray    4x2x3: 24 elems, type `int32`, 96 bytes
b             ndarray    2: 2 elems, type `int32`, 8 bytes
b_reshaped    ndarray    1x2x1: 2 elems, type `int32`, 8 bytes
dim_array     ndarray    3: 3 elems, type `int32`, 12 bytes
given_axis    int        1
mult_out      ndarray    4x2x3: 24 elems, type `int32`, 96 bytes
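If you need this repeatedly, the same recipe can be wrapped in a small reusable function (a sketch; the name and signature are mine):
def mult_along_axis_reshape(a, b, given_axis):
    # all-ones shape, except -1 at the given axis, so b broadcasts there
    dim_array = np.ones(a.ndim, dtype=int)
    dim_array[given_axis] = -1
    return a * b.reshape(dim_array)

out = mult_along_axis_reshape(a, b, 1)   # same as mult_out above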
You could also use a simple matrix trick:
c = np.matmul(a, np.diag(b))
This is basically just matrix multiplication between a and a diagonal matrix whose diagonal holds the elements of b, so it scales a along its last axis (b must have length a.shape[-1]). Maybe not as efficient, but it's a nice single-line solution.
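For instance, a quick check of the equivalence (shapes chosen so that b matches the last axis of a):
import numpy as np
a = np.arange(18).reshape((3, 2, 3))
b = np.array([1, 10, 100])            # length 3 == a.shape[-1]
c = np.matmul(a, np.diag(b))
print(np.array_equal(c, a * b))       # True: same as plain broadcasting on the last axis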
I had a similar need when working on some numerical calculations.
Let's assume we have two arrays (A and B) and a user-specified axis. A is a multi-dimensional array; B is a 1-d array.
The basic idea is to expand B so that A and B have the same shape. Here is the solution code:
import numpy as np

def multiply_along_axis(A, B, axis):
    A = np.array(A)
    B = np.array(B)
    # shape check
    if axis >= A.ndim:
        raise np.AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError("'A' and 'B' must have the same length along the given axis")
    # Expand 'B' according to 'axis':
    # 1. Swap the given axis with axis=0 (we only need the swapped 'shape' tuple here)
    swapped_shape = A.swapaxes(0, axis).shape
    # 2. Repeat:
    #    loop through the number of A's dimensions; at each step:
    #    a) repeat 'B':
    #       the number of repetitions = the length of 'A' along the
    #       current looping step;
    #       the axis along which the values are repeated is always axis=0,
    #       because 'B' initially has just 1 dimension
    #    b) reshape 'B':
    #       'B' is then reshaped to the shape of 'A', but this 'shape' only
    #       contains the dimensions that have been covered by the loop so far
    for dim_step in range(A.ndim - 1):
        B = B.repeat(swapped_shape[dim_step + 1], axis=0)\
             .reshape(swapped_shape[:dim_step + 2])
    # 3. Swap the axis back so the returned 'B' has exactly the
    #    same shape as 'A'
    B = B.swapaxes(0, axis)
    return A * B
And here is an example
In [33]: A = np.random.rand(3,5)*10; A = A.astype(int); A
Out[33]:
array([[7, 1, 4, 3, 1],
[1, 8, 8, 2, 4],
[7, 4, 8, 0, 2]])
In [34]: B = np.linspace(3,7,5); B
Out[34]: array([3., 4., 5., 6., 7.])
In [35]: multiply_along_axis(A, B, axis=1)
Out[35]:
array([[21., 4., 20., 18., 7.],
[ 3., 32., 40., 12., 28.],
[21., 16., 40., 0., 14.]])
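As a quick sanity check (not part of the original session): for a 2-D A with axis=1 the broadcasting happens along the last axis, so the result should match plain A * B:
print(np.allclose(multiply_along_axis(A, B, axis=1), A * B))   # True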