Numpy: Average of values corresponding to unique coordinate positions

Question:

So, I have been browsing Stack Overflow for quite some time now, but I can't seem to find the solution to my problem.

Consider this:

import numpy as np
coo = np.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = np.array([1, 2, 4, 2, 1, 6, 1])

The coo array contains the (x, y) coordinate positions
x = (1, 2, 3, 3, 1, 5, 1)
y = (2, 3, 4, 4, 2, 6, 2)

and the values array holds some data for each of these grid points.

Now I want to get the average of all values for each unique grid point.
For example, the coordinate (1, 2) occurs at positions (0, 4, 6), so for this point I want the mean of values[[0, 4, 6]].
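That is, for this one point the desired number would be

values[[0, 4, 6]].mean()   # mean of (1, 1, 1) -> 1.0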

How could I get this for all unique grid points?

Asked By: HansSnah


Answers:

You can sort coo with np.lexsort to bring the duplicate rows into succession. Then run np.diff along the rows to get a mask of the starts of unique XYs in the sorted version. Using that mask, you can create an ID array that assigns the same ID to duplicates. The ID array can then be used with np.bincount to get the summation of all values sharing an ID, as well as their counts, and thus the average values as the final output. Here's an implementation along those lines:

# Use lexsort to bring duplicate coo XYs in succession
sortidx = np.lexsort(coo.T)
sorted_coo = coo[sortidx]

# Get mask of the start of each unique coo XY
unqID_mask = np.append(True, np.any(np.diff(sorted_coo, axis=0), axis=1))

# Tag/ID each coo XY based on its uniqueness among the others
ID = unqID_mask.cumsum() - 1

# Get the unique coo XYs
unq_coo = sorted_coo[unqID_mask]

# Finally use bincount to sum the values sharing an ID and to count them,
# giving the average values
average_values = np.bincount(ID, values[sortidx]) / np.bincount(ID)

Sample run –

In [65]: coo
Out[65]: 
array([[1, 2],
       [2, 3],
       [3, 4],
       [3, 4],
       [1, 2],
       [5, 6],
       [1, 2]])

In [66]: values
Out[66]: array([1, 2, 4, 2, 1, 6, 1])

In [67]: unq_coo
Out[67]: 
array([[1, 2],
       [2, 3],
       [3, 4],
       [5, 6]])

In [68]: average_values
Out[68]: array([ 1.,  2.,  3.,  6.])
Answered By: Divakar

You can use where:

>>> values[np.where((coo == [1, 2]).all(1))].mean()
1.0
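
This covers a single coordinate. To get all unique grid points with the same idea, one option (just a sketch, relying on the axis argument of np.unique, available in NumPy 1.13+) is to loop over the unique rows:

unq_coo = np.unique(coo, axis=0)   # unique (x, y) rows
means = np.array([values[(coo == xy).all(1)].mean() for xy in unq_coo])
# means -> array([ 1.,  2.,  3.,  6.])

This is easy to read, but it scans coo once per unique point, so the vectorized answers here will scale better.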
Answered By: Holt

It is very likely going to be faster to flatten your indices, i.e.:

# multiply by (max(y) + 1) so that distinct (x, y) pairs cannot collide
flat_index = coo[:, 0] * (np.max(coo[:, 1]) + 1) + coo[:, 1]

then use np.unique on it:

unq, unq_idx, unq_inv, unq_cnt = np.unique(flat_index,
                                           return_index=True,
                                           return_inverse=True,
                                           return_counts=True)
unique_coo = coo[unq_idx]
unique_mean = np.bincount(unq_inv, values) / unq_cnt

This should be faster than the similar approach using np.lexsort.

But under the hood the method is virtually the same.
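
For what it's worth, NumPy 1.13+ added an axis argument to np.unique, so the flattening step can be skipped and the unique rows found directly; a minimal sketch of the same approach:

unq_coo, unq_inv, unq_cnt = np.unique(coo, axis=0,
                                      return_inverse=True,
                                      return_counts=True)
# ravel() guards against NumPy versions where the inverse comes back with an extra axis
unq_mean = np.bincount(unq_inv.ravel(), values) / unq_cnt
# unq_coo -> [[1, 2], [2, 3], [3, 4], [5, 6]], unq_mean -> [ 1.,  2.,  3.,  6.]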

Answered By: Jaime

This is a simple one-liner using the numpy_indexed package (disclaimer: I am its author):

import numpy_indexed as npi
unique, mean = npi.group_by(coo).mean(values)

Should be comparable to the currently accepted answer in performance, as it does similar things under the hood; but all in a well tested package with a nice interface.
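
With the same coo and values as in the question, a quick check (the keys are assumed to come back sorted by coordinate, matching the accepted answer's sample run):

unique, mean = npi.group_by(coo).mean(values)
# unique -> [[1, 2], [2, 3], [3, 4], [5, 6]]
# mean   -> [ 1.,  2.,  3.,  6.]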

Answered By: Eelco Hoogendoorn

Another way to do it is using JAX unique and grad. This approach might be particularly fast because it allows you to run on an accelerator (CPU, GPU, or TPU).

import functools
import jax
import jax.numpy as jnp


@jax.grad
def _unique_sum(unique_values: jnp.ndarray, unique_inverses: jnp.ndarray, values: jnp.ndarray):
    errors = unique_values[unique_inverses] - values
    return -0.5*jnp.dot(errors, errors)


@functools.partial(jax.jit, static_argnames=['size'])
def unique_mean(indices, values, size):
    unique_indices, unique_inverses, unique_counts = jnp.unique(indices, axis=0, return_inverse=True, return_counts=True, size=size)
    unique_values = jnp.zeros(unique_indices.shape[0], dtype=float)
    return unique_indices, _unique_sum(unique_values, unique_inverses, values) / unique_counts


coo = jnp.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = jnp.array([1, 2, 4, 2, 1, 6, 1])
unique_coo, unique_mean = unique_mean(coo, values, size=4)
print(unique_mean.block_until_ready())

The only weird thing is the size argument, since JAX requires all array sizes to be fixed/known beforehand. If you make size too small, valid results get dropped; if you make it too large, the extra padded groups have a count of zero, so the division returns NaNs.

Answered By: Chris Flesher