numpy.unique with order preserved
Question:
['b','b','b','a','a','c','c']
numpy.unique gives
['a','b','c']
How can I get the original order preserved
['b','a','c']
Great answers. Bonus question: why do none of these methods work with this dataset? http://www.uploadmb.com/dw.php?id=1364341573 Here’s the related question: numpy sort weird behavior
Answers:
import numpy as np

a = ['b','b','b','a','a','c','c']
[a[i] for i in sorted(np.unique(a, return_index=True)[1])]
Use the return_index functionality of np.unique. That returns the indices at which the elements first occurred in the input. Then argsort those indices to restore the original order.
>>> u, ind = np.unique(['b','b','b','a','a','c','c'], return_index=True)
>>> u[np.argsort(ind)]
array(['b', 'a', 'c'], dtype='|S1')
np.unique() is relatively slow, O(N log N), but you can preserve the original order with the following code:
import numpy as np
a = np.array(['b','a','b','b','d','a','a','c','c'])
_, idx = np.unique(a, return_index=True)
print(a[np.sort(idx)])
output:
['b' 'a' 'd' 'c']
pd.unique() is much faster for big arrays, since it runs in O(N):
import numpy as np
import pandas as pd
a = np.random.randint(0, 1000, 10000)
%timeit np.unique(a)
%timeit pd.unique(a)
1000 loops, best of 3: 644 us per loop
10000 loops, best of 3: 144 us per loop
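Beyond speed, pd.unique also returns values in order of first occurrence, so no index bookkeeping is needed; a minimal sketch:

```python
import numpy as np
import pandas as pd

a = np.array(['b', 'b', 'b', 'a', 'a', 'c', 'c'])

# pd.unique keeps the order of first appearance,
# unlike np.unique, which sorts its result.
print(pd.unique(a))  # ['b' 'a' 'c']
```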
If you’re trying to remove duplicates from an already sorted iterable, you can use the itertools.groupby function:
>>> from itertools import groupby
>>> a = ['b','b','b','a','a','c','c']
>>> [x[0] for x in groupby(a)]
['b', 'a', 'c']
This works more like the Unix ‘uniq’ command, because it assumes the list is already sorted. If you try it on an unsorted list you will get something like this:
>>> b = ['b','b','b','a','a','c','c','a','a']
>>> [x[0] for x in groupby(b)]
['b', 'a', 'c', 'a']
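For unsorted input without NumPy, a plain-Python seen-set comprehension also keeps only first occurrences (a hypothetical helper, not from the answers above):

```python
def unique_ordered(seq):
    """Order-preserving dedupe: keep the first occurrence of each item."""
    seen = set()
    # set.add returns None, so the `or` clause records the item as seen
    # while the membership test decides whether to keep it.
    return [x for x in seq if not (x in seen or seen.add(x))]

print(unique_ordered(['b', 'b', 'b', 'a', 'a', 'c', 'c', 'a', 'a']))  # ['b', 'a', 'c']
```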
If you want to delete consecutive repeated entries, like the Unix tool uniq, this is a solution:
import numpy as np

def uniq(seq):
    """
    Like the Unix tool uniq: removes consecutive repeated entries.
    Works only on numeric arrays, since it relies on subtraction.
    :param seq: numpy.array
    :return: numpy.array with consecutive duplicates removed
    """
    diffs = np.ones_like(seq)
    diffs[1:] = seq[1:] - seq[:-1]  # nonzero wherever the value changes
    idx = diffs.nonzero()
    return seq[idx]
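A quick usage sketch (the definition is repeated so the snippet runs on its own); note that the subtraction means this only works for numeric arrays:

```python
import numpy as np

def uniq(seq):
    """Like Unix uniq: drop consecutive repeats (numeric arrays only)."""
    diffs = np.ones_like(seq)
    diffs[1:] = seq[1:] - seq[:-1]  # nonzero wherever the value changes
    return seq[diffs.nonzero()]

a = np.array([1, 1, 1, 2, 2, 3, 3, 2, 2])
# the second run of 2s survives, just as with Unix uniq
print(uniq(a))  # [1 2 3 2]
```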
Use an OrderedDict, which keeps keys in insertion order (and is faster than the list-comprehension approach above):
from collections import OrderedDict
a = ['b','a','b','a','a','c','c']
list(OrderedDict.fromkeys(a))
#List we need to remove duplicates from while preserving order
x = ['key1', 'key3', 'key3', 'key2']
thisdict = dict.fromkeys(x) #dict keys are unique and, in Python 3.7+, preserve insertion order
print(list(thisdict)) #convert back to list
output: ['key1', 'key3', 'key2']