How to omit an axis on an outer subtract in numpy

Question:

I am computing a table of Euclidean (or other) distances, used as a lookup for distances in a space. For 2-D points the lookup is done as table[x1, y1, x2, y2]. For instance, table[0, 0, 0, 10] should equal 10 and table[0, 0, 10, 10] should equal 14.142135623730951.

I have a working example that I would like to rewrite so that it works on points with an arbitrary number of dimensions. This code works only for two:

import numpy as np

to = np.indices((5, 5)).T
x_shape, y_shape, n_dim = to.shape
x_sub = np.subtract.outer(to[..., 0], to[..., 0])
y_sub = np.subtract.outer(to[..., 1], to[..., 1])
distances = np.sqrt(x_sub ** 2 + y_sub ** 2)

n_test = 5
for i in np.random.choice(x_shape, n_test):
    for j in np.random.choice(y_shape, n_test):
        for k in np.random.choice(x_shape, n_test):
            for l in np.random.choice(y_shape, n_test):
                d = np.sqrt((i - k) ** 2 + (j - l) ** 2)
                assert (distances[i, j, k, l] == d)
print('\033[92m TEST PASSES')

Now, I could simply run a loop and do np.subtract.outer on each dimension, but I would like to know if there is a way to do the outer subtract on the indices directly (as in np.subtract.outer(to, to)).
When I try this the result has a shape of (5, 5, 2, 5, 5, 2), while what I need is (5, 5, 5, 5, 2). Does anyone know how to do this?
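For the 2-D case above, the desired (5, 5, 5, 5, 2) result can be obtained directly with broadcasting by inserting singleton axes instead of calling np.subtract.outer — a minimal sketch of that idea (this is my own restatement, not part of the original question):

```python
import numpy as np

# Grid of 2-D points; to[i, j] is a length-2 coordinate vector.
to = np.indices((5, 5)).T

# Insert singleton axes so the point axes broadcast against each other
# while the coordinate axis (the last one) stays shared.
diff = to[:, :, None, None, :] - to[None, None, :, :, :]  # shape (5, 5, 5, 5, 2)
distances = np.sqrt((diff ** 2).sum(axis=-1))             # shape (5, 5, 5, 5)
```

The key point is that np.subtract.outer pairs *every* axis of both operands, including the coordinate axis, whereas broadcasting lets you choose which axes to pair and which to share.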

Solution derived from accepted answer:

import numpy as np 
to = np.indices((5, 5, 5)).T
sh = list(to.shape)
n_dims = sh[-1]
t = to.reshape(sh[:-1] + [1] * n_dims + [n_dims]) - to.reshape([1] * n_dims + sh[:-1] + [n_dims])
distances = np.sqrt(np.sum(np.power(t, 2), axis=-1))

n_test = 100
idxs = np.indices(distances.shape).T.reshape(-1, n_dims * 2)
for idx in np.random.permutation(idxs)[:n_test]:
    assert distances[tuple(idx)] == np.linalg.norm(idx[:n_dims] - idx[n_dims:])

print('\033[92m TEST PASSES')
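An equivalent way to build the same broadcasted difference without hand-assembled reshape lists — my own variant, not part of the accepted answer — is np.expand_dims with a tuple of axes (NumPy ≥ 1.18) followed by np.linalg.norm:

```python
import numpy as np

to = np.indices((5, 5, 5)).T  # shape (5, 5, 5, 3); last axis holds coordinates
n_dims = to.shape[-1]

# Insert n_dims singleton axes before the coordinate axis of the left
# operand; broadcasting then pairs every point with every other point.
left = np.expand_dims(to, axis=tuple(range(n_dims, 2 * n_dims)))  # (5, 5, 5, 1, 1, 1, 3)
distances = np.linalg.norm(left - to, axis=-1)                    # (5, 5, 5, 5, 5, 5)
```

This produces the same (5, 5, 5, 5, 5, 5) distance table as the reshape-based solution above.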
Asked By: imkded5


Answers:

I’ll illustrate with a (3,4) array:

In [9]: x = np.arange(12).reshape(3,4)
In [10]: np.subtract.outer(x,x).shape
Out[10]: (3, 4, 3, 4)

But with broadcasting, we can do an ‘outer’ on the first dimensions while "sharing" the second. That can be generalized.

In [11]: (x[:,None,:]-x[None,:,:]).shape
Out[11]: (3, 3, 4)
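This is also the standard pattern for pairwise distances over a flat list of points — a short sketch with made-up data to make the shapes concrete:

```python
import numpy as np

pts = np.array([[0.0, 0.0],
                [3.0, 4.0],
                [6.0, 8.0]])  # (n, d) = (3, 2): three 2-D points

# 'Outer' over the point axis, shared coordinate axis:
diffs = pts[:, None, :] - pts[None, :, :]   # (3, 3, 2) pairwise differences
dists = np.sqrt((diffs ** 2).sum(axis=-1))  # (3, 3) pairwise distances
```

Here dists[i, j] is the Euclidean distance between points i and j, e.g. dists[0, 1] is 5.0.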

The full outer result can be reproduced with broadcasting:

In [12]: np.allclose(np.subtract.outer(x,x), x[:,:,None,None]-x[None,None,:,:])
Out[12]: True
Answered By: hpaulj