Faster numpy solution instead of itertools.combinations?

Question:

I’m using itertools.combinations() as follows:

import itertools
import numpy as np

L = [1,2,3,4,5]
N = 3

output = np.array([a for a in itertools.combinations(L,N)]).T

Which yields me the output I need:

array([[1, 1, 1, 1, 1, 1, 2, 2, 2, 3],
       [2, 2, 2, 3, 3, 4, 3, 3, 4, 4],
       [3, 4, 5, 4, 5, 5, 4, 5, 5, 5]])

I’m using this expression repeatedly and heavily in a multiprocessing environment, and I need it to be as fast as possible.

From this post I understand that itertools-based code isn’t the fastest solution and that using numpy could be an improvement; however, I’m not good enough at numpy optimization tricks to understand and adapt the iterative code written there, or to come up with my own optimization.

Any help would be greatly appreciated.

EDIT:

L comes from a pandas dataframe, so it can just as well be treated as a numpy array:

L = df.L.values
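
For completeness, a minimal toy setup standing in for the real dataframe:

import pandas as pd

df = pd.DataFrame({'L': [1, 2, 3, 4, 5]})  # toy frame just for illustration
L = df.L.values                            # plain numpy array([1, 2, 3, 4, 5])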
Asked By: Khris


Answers:

This is most certainly not faster than itertools.combinations, but it is vectorized numpy:

def nd_triu_indices(T, N):
    # N coordinate arrays over range(len(T)), stacked along a new leading axis
    o = np.array(np.meshgrid(*(np.arange(len(T)),) * N))
    # keep only the grid points whose N indices are strictly increasing,
    # i.e. the N-dimensional analogue of the upper-triangle indices
    return np.array(T)[o[..., np.all(o[1:] > o[:-1], axis=0)]]
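
As a quick sanity check on the question's inputs (note that the columns may come out in a different order than itertools.combinations produces them):

T = [1, 2, 3, 4, 5]
N = 3
print(nd_triu_indices(T, N))  # shape (3, 10): each column is one 3-combination of T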

%timeit np.array(list(itertools.combinations(T,N))).T
The slowest run took 4.40 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8.6 µs per loop

%timeit nd_triu_indices(T,N)
The slowest run took 4.64 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 52.4 µs per loop

Not sure if this can be vectorized another way, or if one of the optimization wizards around here can make this method faster.

EDIT: Came up with another way, but still not faster than combinations:

%timeit np.array(T)[np.array(np.where(np.fromfunction(lambda *i: np.all(np.array(i)[1:]>np.array(i)[:-1], axis=0),(len(T),)*N,dtype=int)))]
The slowest run took 7.78 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 34.3 µs per loop
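
For readability, the same one-liner can be unpacked into a small helper (the helper name is mine; this is just the expression above restructured, not a different algorithm):

def combos_via_fromfunction(T, N):
    # boolean mask over an N-dimensional index grid: True where the
    # indices are strictly increasing (i0 < i1 < ... < i_{N-1})
    mask = np.fromfunction(
        lambda *i: np.all(np.array(i)[1:] > np.array(i)[:-1], axis=0),
        (len(T),) * N,
        dtype=int,
    )
    # np.where gives the N index arrays of the True positions;
    # stacking them yields shape (N, number_of_combinations)
    idx = np.array(np.where(mask))
    return np.array(T)[idx]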
Answered By: Daniel F

Here’s one (nump) that’s slightly faster than itertools. UPDATE: and one (nump2) that’s actually quite a bit faster:

import numpy as np
import itertools
import timeit

def nump(n, k, i=0):
    if k == 1:
        a = np.arange(i, i+n)
        return tuple([a[None, j:] for j in range(n)])
    template = nump(n-1, k-1, i+1)
    full = np.r_[np.repeat(np.arange(i, i+n-k+1),
                           [t.shape[1] for t in template])[None, :],
                 np.c_[template]]
    return tuple([full[:, j:] for j in np.r_[0, np.add.accumulate(
        [t.shape[1] for t in template[:-1]])]])

def nump2(n, k):
    # build the (k, n-choose-k) array of index combinations row by row:
    # row 0 holds the smallest index of each combination, and each later
    # row is filled in via column repetition and cumulative sums
    a = np.ones((k, n-k+1), dtype=int)
    a[0] = np.arange(n-k+1)
    for j in range(1, k):
        reps = (n-k+j) - a[j-1]          # number of valid choices for index j given index j-1
        a = np.repeat(a, reps, axis=1)   # duplicate each column that many times
        ind = np.add.accumulate(reps)
        a[j, ind[:-1]] = 1-reps[1:]
        a[j, 0] = j
        a[j] = np.add.accumulate(a[j])   # cumulative sum turns the offsets into actual indices
    return a

def itto(L, N):
    return np.array([a for a in itertools.combinations(L,N)]).T

k = 6
n = 12
N = np.arange(n)

assert np.all(nump2(n,k) == itto(N,k))

print('numpy    ', timeit.timeit('f(a,b)', number=100, globals={'f':nump, 'a':n, 'b':k}))
print('numpy 2  ', timeit.timeit('f(a,b)', number=100, globals={'f':nump2, 'a':n, 'b':k}))
print('itertools', timeit.timeit('f(a,b)', number=100, globals={'f':itto, 'a':N, 'b':k}))

Timings:

k = 3, n = 50
numpy     0.06967267207801342
numpy 2   0.035096961073577404
itertools 0.7981023890897632

k = 3, n = 10
numpy     0.015058324905112386
numpy 2   0.0017436158377677202
itertools 0.004743851954117417

k = 6, n = 12
numpy     0.03546895203180611
numpy 2   0.00997065706178546
itertools 0.05292179994285107
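
Since nump2 returns index combinations (the assert above checks that they match itertools.combinations over np.arange(n)), reproducing the question's value output is just one fancy-indexing step, e.g.:

L = np.array([1, 2, 3, 4, 5])
print(L[nump2(len(L), 3)])
# [[1 1 1 1 1 1 2 2 2 3]
#  [2 2 2 3 3 4 3 3 4 4]
#  [3 4 5 4 5 5 4 5 5 5]]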
Answered By: Paul Panzer

I know this question is old, but I have been working on it recently, and it still might help. From my (pretty extensive) testing, I have found that first generating combinations of the indices, and then using those indices to slice the array, is much faster than building combinations directly from the array values. I’m sure that using @Paul Panzer’s nump2 function to generate these indices could be even faster.

Here is an example:

import numpy as np
from math import factorial
import itertools as iters
from timeit import timeit
from perfplot import show

def combinations_iter(array: np.ndarray, r: int = 3) -> np.ndarray:
    # baseline: build the combinations directly from the array values
    return np.array([*iters.combinations(array, r=r)], dtype=array.dtype)

def combinations_iter_idx(array: np.ndarray, r: int = 3) -> np.ndarray:
    n_items = array.shape[0]
    # number of combinations: n choose r
    num_combinations = factorial(n_items) // (factorial(n_items - r) * factorial(r))
    # build all index combinations as one flat int64 buffer, then reshape to (n_combinations, r)
    combination_idx = np.fromiter(
        iters.chain.from_iterable(
            iters.combinations(np.arange(n_items, dtype=np.int64), r=r)
        ),
        dtype=np.int64,
        count=num_combinations * r,
    ).reshape(-1, r)
    # fancy-index the original array with the precomputed index combinations
    return array[combination_idx]

show(
    setup = lambda n: np.random.uniform(0,100,(n,3)),
    kernels = [combinations_iter, combinations_iter_idx],
    labels = ['pure itertools', 'itertools for index'],
    n_range = np.geomspace(5,300,10, dtype = np.int64),
    xlabel = "n",
    logx = True,
    logy = False,
    equality_check = np.allclose,
    show_progress = True,
    max_time = None,
    time_unit = "ms",
)

[Plot: timing of combinations built via index slicing vs. combinations built directly from the array]

It is clear that the indexing method is much faster.
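
Note that both functions return one combination per row, whereas the question asked for one per column; applied to the question's 1-D L, a transpose recovers that layout (a small usage sketch, not part of the benchmark above):

L = np.array([1, 2, 3, 4, 5])
combos = combinations_iter_idx(L, r=3)  # shape (10, 3): one combination per row
print(combos.T)                         # shape (3, 10): matches the layout in the question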

Answered By: milandeleev