Let's say I have a list like this:
mylist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
and I want to group the first elements by their second element, so the result is [["A", "C"], ["B"], ["D", "E"]].
>>> L1 = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> import collections
>>> D1 = collections.defaultdict(list)
>>> for element in L1:
...     D1[element[1]].append(element[0])
...
>>> L2 = D1.values()
>>> print L2
[['A', 'C'], ['B'], ['D', 'E']]
>>>
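One caveat: the order of D1.values() follows the dict's iteration order, which is only guaranteed to match insertion order from Python 3.7 onward. Below is a minimal sketch, assuming you want a deterministic group order on any interpreter (the groups and result names are just for illustration), that iterates over the sorted keys instead:

import collections

mylist = [["A", 0], ["B", 1], ["C", 0], ["D", 2], ["E", 2]]
groups = collections.defaultdict(list)
for value, key in mylist:
    groups[key].append(value)

# Sort the keys explicitly so the group order does not depend on the
# dict implementation's iteration order.
result = [groups[k] for k in sorted(groups)]
print(result)   # [['A', 'C'], ['B'], ['D', 'E']]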
>>> xs = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> xs.sort(key=lambda x: x[1])
>>> reduce(lambda l, x: (l.append([x]) if l[-1][0][1] != x[1] else l[-1].append(x)) or l, xs[1:], [[xs[0]]]) if xs else []
[[['A', 0], ['C', 0]], [['B', 1]], [['D', 2], ['E', 2]]]
Basically, if the list is sorted, it is possible to reduce it by looking at the last group constructed in the previous steps: that tells you whether you need to start a new group or extend the existing one. The "... or l" part is the trick that makes this work inside a lambda: append returns None, so the or falls through to l, which becomes the accumulator for the next step. (It would be nicer if append returned something more useful than None, but, alas, such is Python.)
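If the one-liner is hard to read, the same fold can be written with a named step function instead of the append-or trick. This is just an illustrative sketch (step and grouped are made-up names), not the original answer:

from functools import reduce   # builtin in Python 2, needs the import on Python 3

xs = [["A", 0], ["B", 1], ["C", 0], ["D", 2], ["E", 2]]
xs.sort(key=lambda x: x[1])

def step(groups, pair):
    # Start a new group when the key changes; otherwise extend
    # the group built in the previous steps.
    if groups and groups[-1][0][1] == pair[1]:
        groups[-1].append(pair)
    else:
        groups.append([pair])
    return groups

grouped = reduce(step, xs, [])
print(grouped)   # [[['A', 0], ['C', 0]], [['B', 1]], [['D', 2], ['E', 2]]]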
I don't know about elegant, but it's certainly doable:
oldlist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
# change into: list = [["A", "C"], ["B"], ["D", "E"]]
order=[]
dic=dict()
for value,key in oldlist:
try:
dic[key].append(value)
except KeyError:
order.append(key)
dic[key]=[value]
newlist=map(dic.get, order)
print newlist
This preserves the order of the first occurrence of each key, as well as the order of items for each key. It requires the key to be hashable, but does not otherwise assign meaning to it.
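The same first-occurrence-preserving grouping can also be written without the try/except by using dict.setdefault. This is purely an equivalent sketch for comparison, not part of the answer above:

oldlist = [["A", 0], ["B", 1], ["C", 0], ["D", 2], ["E", 2]]

order = []   # keys in order of first occurrence
dic = {}
for value, key in oldlist:
    if key not in dic:
        order.append(key)
    # setdefault returns the existing list for the key, or inserts
    # and returns a new empty list if the key is unseen.
    dic.setdefault(key, []).append(value)

newlist = [dic[k] for k in order]
print(newlist)   # [['A', 'C'], ['B'], ['D', 'E']]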
from operator import itemgetter
from itertools import groupby
lki = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
lki.sort(key=itemgetter(1))
glo = [[x for x, y in g]
       for k, g in groupby(lki, key=itemgetter(1))]
print glo
EDIT
Another solution that needs no import, is more readable, keeps the ordering, and is 22% shorter than the preceding one:
oldlist = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
newlist, dicpos = [],{}
for val,k in oldlist:
if k in dicpos:
newlist[dicpos[k]].extend(val)
else:
newlist.append([val])
dicpos[k] = len(dicpos)
print newlist
Howard's answer is concise and elegant, but it's also O(n^2) in the worst case. For large lists with large numbers of grouping key values, you'll want to sort the list first and then use itertools.groupby:
>>> from itertools import groupby
>>> from operator import itemgetter
>>> seq = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> seq.sort(key = itemgetter(1))
>>> groups = groupby(seq, itemgetter(1))
>>> [[item[0] for item in data] for (key, data) in groups]
[['A', 'C'], ['B'], ['D', 'E']]
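The sort step matters because groupby only merges runs of consecutive items with equal keys; here is a quick illustrative sketch of what happens if you skip it:

>>> from itertools import groupby
>>> from operator import itemgetter
>>> seq = [["A",0], ["B",1], ["C",0], ["D",2], ["E",2]]
>>> # Without sorting, the two 0-keyed items are not adjacent,
>>> # so they end up in separate groups.
>>> [[item[0] for item in data] for (key, data) in groupby(seq, itemgetter(1))]
[['A'], ['B'], ['C'], ['D', 'E']]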
Edit: I changed this after seeing eyequem's answer: itemgetter(1) is nicer than lambda x: x[1].
# assumes the keys are non-negative integers; unused keys produce empty groups
maxkey = max(key for (item, key) in mylist)
newlist = [[] for i in range(maxkey + 1)]
for item, key in mylist:
    newlist[key].append(item)
You can do it in a single list comprehension, perhaps more elegant but O(n**2):
[[item for (item, key) in mylist if key == i]
 for i in range(max(key for (item, key) in mylist) + 1)]