Group list by values

Question:

Let’s say I have a list like this:

mylist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]

How can I most elegantly group this to get this list output in Python:

[["A", "C"], ["B"], ["D", "E"]]

So the values are grouped by the second value, but the order is preserved…

Asked By: Veles


Answers:

# assumes the keys are small non-negative integers usable as list indices
maxkey = max(key for (item, key) in mylist)
newlist = [[] for i in range(maxkey + 1)]
for item, key in mylist:
    newlist[key].append(item)

You can do it in a single list comprehension, which is perhaps more elegant but O(n**2):

[[item for (item, key) in mylist if key == i] for i in range(max(key for (item, key) in mylist) + 1)]
Answered By: sverre

I don’t know about elegant, but it’s certainly doable:

oldlist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
# change into: list = [["A", "C"], ["B"], ["D", "E"]]

order=[]
dic=dict()
for value,key in oldlist:
  try:
    dic[key].append(value)
  except KeyError:
    order.append(key)
    dic[key]=[value]
newlist=map(dic.get, order)

print newlist

This preserves the order of the first occurrence of each key, as well as the order of items for each key. It requires the key to be hashable, but does not otherwise assign any meaning to it.
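As a side note (a sketch, not part of the original answer, with made-up sample data), the same pattern works unchanged for non-integer keys, since the key is only used as a dictionary key:

grocery = [["apple", "fruit"], ["carrot", "veg"], ["pear", "fruit"]]

order, dic = [], {}
for value, key in grocery:
    if key in dic:
        dic[key].append(value)
    else:
        order.append(key)
        dic[key] = [value]

print([dic[k] for k in order])  # [['apple', 'pear'], ['carrot']]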

Answered By: Yann Vernier
values = set(map(lambda x:x[1], mylist))
newlist = [[y[0] for y in mylist if y[1]==x] for x in values]
Answered By: Howard
from operator import itemgetter
from itertools import groupby

lki = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
lki.sort(key=itemgetter(1))

glo = [[x for x, y in g]
       for k, g in groupby(lki, key=itemgetter(1))]

print(glo)


EDIT

Another solution, which needs no import, is more readable, keeps the ordering, and is 22% shorter than the preceding one:

oldlist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]

newlist, dicpos = [], {}
for val, k in oldlist:
    if k in dicpos:
        # append, not extend: extend would split a multi-character value into characters
        newlist[dicpos[k]].append(val)
    else:
        newlist.append([val])
        dicpos[k] = len(dicpos)

print(newlist)
Answered By: eyquem

Howard’s answer is concise and elegant, but it’s also O(n^2) in the worst case. For large lists with many distinct grouping keys, you’ll want to sort the list first and then use itertools.groupby:

>>> from itertools import groupby
>>> from operator import itemgetter
>>> seq = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> seq.sort(key = itemgetter(1))
>>> groups = groupby(seq, itemgetter(1))
>>> [[item[0] for item in data] for (key, data) in groups]
[['A', 'C'], ['B'], ['D', 'E']]
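
As a side note (not part of the original answer), the sort step matters because itertools.groupby only merges consecutive items with equal keys; run on the unsorted list, the 0-group gets split in two:

>>> [[item[0] for item in data] for (key, data) in groupby([["A",0], ["B",1], ["C",0], ["D",2], ["E",2]], itemgetter(1))]
[['A'], ['B'], ['C'], ['D', 'E']]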

Edit:

I changed this after seeing eyquem’s answer: itemgetter(1) is nicer than lambda x: x[1].

Answered By: Robert Rossney
>>> import collections
>>> L1 = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> D1 = collections.defaultdict(list)
>>> for element in L1:
...     D1[element[1]].append(element[0])
... 
>>> L2 = list(D1.values())
>>> print(L2)
[['A', 'C'], ['B'], ['D', 'E']]
Answered By: dting
>>> from functools import reduce
>>> xs = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> xs.sort(key=lambda x: x[1])
>>> reduce(lambda l, x: (l.append([x]) if l[-1][0][1] != x[1] else l[-1].append(x)) or l, xs[1:], [[xs[0]]]) if xs else []
[[['A', 0], ['C', 0]], [['B', 1]], [['D', 2], ['E', 2]]]

Basically, if the list is sorted, it is possible to reduce by looking at the last group constructed by the previous steps – you can tell whether you need to start a new group or modify the existing one. The ... or l bit is the trick that lets this be written as a lambda: append returns None, so or l makes the expression evaluate to the accumulator. (It would be nicer if append returned something more useful than None, but, alas, such is Python.)
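
As a side note (a sketch, not part of the original answer), the same sorted-then-fold idea written as a plain loop, keeping only the names, yields exactly the output the question asks for:

xs = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
xs.sort(key=lambda x: x[1])

groups = []
for item, key in xs:
    # start a new group whenever the key changes from the previous item
    if not groups or key != prev_key:
        groups.append([item])
    else:
        groups[-1].append(item)
    prev_key = key

print(groups)  # [['A', 'C'], ['B'], ['D', 'E']]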

Answered By: Sassa NF

If you use the convtools library, which provides a lot of data-processing primitives and generates ad hoc code under the hood:

from convtools import conversion as c

my_list = [["A", 0], ["B", 1], ["C", 0], ["D", 2], ["E", 2]]

# store the converter somewhere because this is where code generation
# takes place
converter = (
    c.group_by(c.item(1))
    .aggregate(c.ReduceFuncs.Array(c.item(0)))
    .gen_converter()
)
assert converter(my_list) == [["A", "C"], ["B"], ["D", "E"]]
Answered By: westandskif

An answer inspired by @Howard’s answer, which I liked and spent some time turning into a reusable function.

Besides the extra comments and type hints, the differences are that i) I use itemgetter, which is faster than a lambda that returns a given element, and ii) I use the built-in filter instead of a list comprehension, which returns a lazy iterator instead of creating a list object, but also returns the full objects that match the condition (not just their first element).

from operator import itemgetter
from typing import Any, Iterable, List, Tuple


def group_by(nested_iterables: Iterable[Iterable],
             key_index: int) -> List[Tuple[Any, Iterable[Any]]]:
    """ Groups elements nested in <nested_iterables> based on their <key_index>_th element.

    Behaves similarly to itertools.groupby when the input to the itertools function is sorted.

    E.g. If <nested_iterables> = [(1, 2), (2, 3), (5, 2), (9, 3)] and
    <key_index> = 1, we will return [(2, [(1, 2), (5, 2)]), (3, [(2, 3), (9, 3)])].

    Returns:
        A list of (group_key, values) tuples where <values> is an iterator of the iterables in
            <nested_iterables> that all have their <key_index>_th element equal to <group_key>.
    """
    group_keys = set(map(itemgetter(key_index), nested_iterables))
    # bind key as a default argument so each lazy filter keeps its own key;
    # a bare closure over the loop variable would leave every filter using the last key
    return [(key, filter(lambda x, key=key: x[key_index] == key, nested_iterables))
            for key in group_keys]
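
A quick usage sketch (not part of the original answer) with the question’s list; the groups are lazy filter iterators, so they have to be materialized, and because the keys come from a set the group order is unspecified:

mylist = [["A", 0], ["B", 1], ["C", 0], ["D", 2], ["E", 2]]

for group_key, values in group_by(mylist, key_index=1):
    # each values is a lazy filter object; turn it into a list to inspect it
    print(group_key, [item[0] for item in values])
# possible output (group order depends on set iteration):
# 0 ['A', 'C']
# 1 ['B']
# 2 ['D', 'E']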
Answered By: MattSt