Multiply (a x a) square matrix by (a x c x d) matrix

Question:

Let's say I have a list of (c x d) matrices, say a of them, and a corresponding coefficient for each matrix.

Is there a quick way in NumPy to scalar-multiply each matrix by its coefficient at once, while keeping the tensor data structure, or do I need to go through it manually in a for loop, i.e.

X = np.array([np.multiply(coefs[i], X[i]) for i in range(len(coefs))])

For example, X.shape = (3, 4, 5) and coefs.shape = (3,).
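For concreteness, a minimal runnable version of the loop approach above (the array contents are made up purely for illustration):

import numpy as np

X = np.ones((3, 4, 5))             # a = 3 matrices, each 4 x 5
coefs = np.array([2.0, 4.0, 6.0])  # one coefficient per matrix

# loop-based baseline: scale each (4, 5) matrix by its own coefficient
X = np.array([np.multiply(coefs[i], X[i]) for i in range(len(coefs))])
print(X.shape)  # (3, 4, 5)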

Asked By: Andrew Latham


Answers:

X = np.array([[[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]],
              [[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]],
              [[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]])

coeffs = np.array([2,4,6])

You need to add axes to coeffs so it will broadcast in the dimension(s) you want.

>>> X * coeffs[:, np.newaxis, np.newaxis]
array([[[2, 2, 2, 2, 2],
        [2, 2, 2, 2, 2],
        [2, 2, 2, 2, 2],
        [2, 2, 2, 2, 2]],

       [[4, 4, 4, 4, 4],
        [4, 4, 4, 4, 4],
        [4, 4, 4, 4, 4],
        [4, 4, 4, 4, 4]],

       [[6, 6, 6, 6, 6],
        [6, 6, 6, 6, 6],
        [6, 6, 6, 6, 6],
        [6, 6, 6, 6, 6]]])
>>> 

The np.newaxis entries allow the values of coeffs to line up with the first dimension of X; they are then broadcast across the remaining dimensions.
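Equivalent ways to add those trailing axes (a sketch; None is NumPy's built-in alias for np.newaxis, and reshape gives the same (3, 1, 1) shape):

X * coeffs[:, np.newaxis, np.newaxis]  # as above
X * coeffs[:, None, None]              # None is an alias for np.newaxis
X * coeffs.reshape(-1, 1, 1)           # reshape to (3, 1, 1), then broadcast

All three produce the same result.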

Answered By: wwii

Solution:

This is a classic case for doing it the NumPy way. The following one-liner does the trick using the Einstein summation function, np.einsum(), whose subscript labels allow for broadcasting of dimensions and control over the output:

np.einsum('ijk,i...->ijk', X, coeffs)

Subscript String:

  • The comma , separates the subscript labels of the first operand (on the left) from those of the second operand (on the right).
  • The subscript labels after the -> symbol give the dimensions of the output.
  • The ellipsis ... lets the coeffs vector broadcast into the two extra dimensions of X (the ... is optional here; omitting it, as in 'ijk,i->ijk', specifies the broadcasting explicitly).

It takes a little getting used to, but it packs many matrix-vector operations, such as trace, diag, inner/outer product, and element-wise multiplication, into one powerful formulation.
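For instance, a few of those operations expressed with einsum (a sketch with small, made-up arrays):

A = np.arange(4).reshape(2, 2)
u, v = np.array([1, 2]), np.array([3, 4])

np.einsum('ii', A)            # trace of A
np.einsum('ii->i', A)         # diagonal of A
np.einsum('i,i->', u, v)      # inner product of u and v
np.einsum('i,j->ij', u, v)    # outer product of u and v
np.einsum('ij,ij->ij', A, A)  # element-wise multiplication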

In words:

The string essentially says: take all the dimensions of X, multiply them element-wise with the coeffs vector (broadcast across the extra dimensions of X), and produce an output of the same dimensions as X.

Output:

>>> np.einsum( 'ijk,i->ijk', np.ones((2,2,2)), np.array([2,4]))

array([[[ 2.,  2.],
        [ 2.,  2.]],

       [[ 4.,  4.],
        [ 4.,  4.]]])

In the 2D case, element-wise scaling along either axis:

>>> np.einsum( 'ij,i->ij', np.ones((2,2)), np.array([2,4]))
array([[ 2.,  2.],
       [ 4.,  4.]]) 

>>> np.einsum( 'ij,j->ij', np.ones((2,2)), np.array([2,4]))
array([[ 2.,  4.],
       [ 2.,  4.]])
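The same 2D results can also be written with plain broadcasting instead of einsum (a sketch):

A = np.ones((2, 2))
w = np.array([2., 4.])

A * w[:, np.newaxis]  # scales rows, same as 'ij,i->ij'
A * w                 # scales columns, same as 'ij,j->ij'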

Alternative:

Of course, you can also manually broadcast the vector across the extra dimensions using np.newaxis and simple multiplication:

X * coeffs[:, np.newaxis, np.newaxis]
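Both formulations give identical results; a quick sanity check (a sketch, defining the arrays locally):

X = np.ones((3, 4, 5))
coeffs = np.array([2., 4., 6.])

out_einsum = np.einsum('ijk,i->ijk', X, coeffs)
out_broadcast = X * coeffs[:, np.newaxis, np.newaxis]
print(np.allclose(out_einsum, out_broadcast))  # True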

Timing:

Using the IPython line magic %timeit, we can see that einsum can be a little more expensive for larger arrays:

>>> %timeit np.einsum('ijk,i...->ijk',np.ones((10,100,100)),np.ones(10))
1000 loops, best of 3: 209 µs per loop

>>> %timeit np.ones((10,100,100))*np.ones(10)[:,np.newaxis, np.newaxis]
1000 loops, best of 3: 129 µs per loop

That cost is traded off against the versatility the Einstein sum offers.

P.S. If you are a MATLAB user accustomed to its rich indexing framework, you may be interested in checking out this NumPy-MATLAB comparison page.

Answered By: Pacific Stickler