evaluate many monomials at many points

Question:

The following problem concerns evaluating many monomials (x**k * y**l * z**m) at many points.

I would like to compute the “inner power” of two numpy arrays, i.e.,

import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)

out = numpy.ones((10, 5))
for i in range(10):
    for j in range(5):
        for k in range(3):
            out[i, j] *= a[i, k]**b[k, j]

print(out.shape)

If the line instead read

out[i, j] += a[i, k]*b[k, j]

this would be a number of inner products, computable with a simple dot or einsum.
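For reference, a minimal sketch of that additive case with the standard dot/einsum API:

import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)

# sum over the shared axis k instead of multiplying powers
out = numpy.einsum("ik,kj->ij", a, b)   # same result as numpy.dot(a, b)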

Is it possible to perform the above loop in just one numpy line?

Asked By: Nico Schlömer


Answers:

You can use broadcasting after extending those arrays to 3D versions –

(a[:,:,None]**b[None,:,:]).prod(axis=1)

Simply put –

(a[...,None]**b[None]).prod(1)

Basically, we keep the first axis of a (length 10) and the last axis of b (length 5) as the output axes, line up the shared length-3 axis (the last axis of a against the first axis of b), take element-wise powers, and then reduce with a product along that shared axis. Schematically, using the given sample shapes –

a[:,:,None] :  10  x  3  x  1
b[None,:,:] :   1  x  3  x  5
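A quick sanity check of the broadcasting one-liner against the triple loop from the question (a minimal sketch):

import numpy as np

a = np.random.rand(10, 3)
b = np.random.rand(3, 5)

# broadcast to shape (10, 3, 5), take element-wise powers, multiply over the shared axis
out = (a[:, :, None] ** b[None, :, :]).prod(axis=1)

# reference: the explicit triple loop
ref = np.ones((10, 5))
for i in range(10):
    for j in range(5):
        for k in range(3):
            ref[i, j] *= a[i, k] ** b[k, j]

print(np.allclose(out, ref))  # True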
Answered By: Divakar

What about thinking of it in terms of logarithms:

import numpy as np

a = np.random.rand(10, 3)
b = np.random.rand(3, 5)

out = np.exp(np.matmul(np.log(a), b))

Since c_ij = prod(a_ik ** b_kj, k=1..K), we have log(c_ij) = sum(log(a_ik) * b_kj, k=1..K), which is just the matrix product of log(a) and b.

Note: zeros in a may mess up the result (negatives too, but then the result would not be well defined anyway). In a quick test it did not actually break, but I do not know whether that behaviour is guaranteed by NumPy, so to be safe you can add something like this at the end:

out[np.logical_or.reduce(a < eps, axis=1)] = 0  # eps: small positive tolerance; zero out rows where some a[i, k] falls below it
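Putting the pieces together (a minimal sketch; eps is a hypothetical tolerance, and the comparison with the broadcasting answer assumes a has no zero or negative entries, which holds for these random inputs):

import numpy as np

eps = 1.0e-15  # hypothetical cut-off for "effectively zero" entries of a
a = np.random.rand(10, 3)
b = np.random.rand(3, 5)

out = np.exp(np.matmul(np.log(a), b))
out[np.logical_or.reduce(a < eps, axis=1)] = 0

# cross-check against the direct broadcasting formulation
print(np.allclose(out, (a[:, :, None] ** b[None, :, :]).prod(axis=1)))  # True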
Answered By: jdehesa

Two more solutions:

Inlining

np.array([
    np.prod([a[:, i]**bb[i] for i in range(len(bb))], axis=0)
    for bb in b.T
]).T

and using power.outer:

np.prod([np.power.outer(a[:, k], b[k]) for k in range(len(b))], axis=0)

Both are a bit slower than the broadcasting solution.

[Performance plot: runtime vs. len(a) for the loop, broadcasting, inline, log_exp, and outer_power kernels]


Code to reproduce the plot:

import numpy as np
import perfplot


def loop(a, b):
    m = a.shape[0]
    n = b.shape[1]
    out = np.ones((m, n))
    for i in range(m):
        for j in range(n):
            for k in range(3):
                out[i, j] *= a[i, k] ** b[k, j]
    return out


def broadcasting(a, b):
    return (a[..., None] ** b[None]).prod(1)


def log_exp(a, b):
    # count, per (i, j), how many sign flips the product picks up:
    # a negative base flips the sign only when raised to an odd power
    neg_a = np.zeros(a.shape, dtype=int)
    neg_a[a < 0.0] = 1
    odd_b = np.zeros(b.shape, dtype=int)
    odd_b[b % 2 == 1] = 1
    negative_count = np.dot(neg_a, odd_b)

    # work with |a| in log space, then restore the sign
    out = (-1) ** negative_count * np.exp(
        np.matmul(np.log(abs(a), where=abs(a) > 0.0), b)
    )

    # any zero base raised to a positive power forces the product to zero
    zero_a = np.zeros(a.shape, dtype=int)
    zero_a[a == 0.0] = 1
    pos_b = np.zeros(b.shape, dtype=int)
    pos_b[b > 0] = 1
    zero_count = np.dot(zero_a, pos_b)
    out[zero_count > 0] = 0.0
    return out


def inline(a, b):
    return np.array(
        [np.prod([a[:, i] ** bb[i] for i in range(len(bb))], axis=0) for bb in b.T]
    ).T


def outer_power(a, b):
    return np.prod([np.power.outer(a[:, k], b[k]) for k in range(len(b))], axis=0)


b = perfplot.bench(
    setup=lambda n: (
        np.random.rand(n, 3) - 0.5,
        np.random.randint(0, 10, (3, n)),
    ),
    n_range=[2**k for k in range(13)],
    kernels=[loop, broadcasting, inline, log_exp, outer_power],
    xlabel="len(a)",
)
b.save("out.png")
b.show()
Answered By: Nico Schlömer
import numpy

a = numpy.random.rand(10, 3)
b = numpy.random.rand(3, 5)

out = [[numpy.prod([a[i, k]**b[k, j] for k in range(3)]) for j in range(5)] for i in range(10)]
# note: `out` is a nested Python list; use numpy.array(out) if an ndarray is needed
Answered By: Mengliu