If I have a list of dictionaries, say:

[{'id': 1, 'name': 'paul'},
 {'id': 2, 'name': 'john'}]

and I would like to remove the dictionary with 'id' of 2, what is the most efficient way to do this programmatically?
You can try the following:
a = [{'id': 1, 'name': 'paul'},
{'id': 2, 'name': 'john'}]
for e in range(len(a) - 1, -1, -1):
    if a[e]['id'] == 2:
        a.pop(e)
If you can't pop from the beginning, pop from the end: iterating over the indices in reverse means a pop never shifts the positions still to be visited, so it won't ruin the for loop.
thelist[:] = [d for d in thelist if d.get('id') != 2]
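One detail worth noting: the thelist[:] = ... slice assignment replaces the contents of the existing list object rather than rebinding the name, so any other references to that same list see the removal too. A minimal sketch (the variable names are only illustrative):

thelist = [{'id': 1, 'name': 'paul'}, {'id': 2, 'name': 'john'}]
alias = thelist                                         # second reference to the same list object
thelist[:] = [d for d in thelist if d.get('id') != 2]
print(alias)                                            # [{'id': 1, 'name': 'paul'}] -- the alias sees the change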
Edit: as some doubts have been expressed in a comment about the performance of this code (some based on misunderstanding Python's performance characteristics, some on assuming beyond the given specs that there is exactly one dict in the list with a value of 2 for key 'id'), I wish to offer reassurance on this point.
On an old Linux box, measuring this code:
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random" "thelist=list(lod); random.shuffle(thelist); thelist[:] = [d for d in thelist if d.get('id') != 2]"
10000 loops, best of 3: 82.3 usec per loop
of which about 57 microseconds for the random.shuffle (needed to ensure that the element to remove is not ALWAYS at the same spot;-) and 0.65 microseconds for the initial copy (whoever worries about performance impact of shallow copies of Python lists is most obviously out to lunch;-), needed to avoid altering the original list in the loop (so each leg of the loop does have something to delete;-).
When it is known that there is exactly one item to remove, it's possible to locate and remove it even more expeditiously:
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(99)]; import random" "thelist=list(lod); random.shuffle(thelist); where=(i for i,d in enumerate(thelist) if d.get('id')==2).next(); del thelist[where]"
10000 loops, best of 3: 72.8 usec per loop
(use the next builtin rather than the .next method if you're on Python 2.6 or better, of course) -- but this code breaks down if the number of dicts that satisfy the removal condition is not exactly one. Generalizing this, we have:
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random" "thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()" "for i in where: del thelist[i]"
10000 loops, best of 3: 23.7 usec per loop
where the shuffling can be removed because there are already three equispaced dicts to remove, as we know. And the listcomp, unchanged, fares well:
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*3; import random" "thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]"
10000 loops, best of 3: 23.8 usec per loop
totally neck and neck, with even just 3 elements of 99 to be removed. With longer lists and more repetitions, this holds even more of course:
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random" "thelist=list(lod); where=[i for i,d in enumerate(thelist) if d.get('id')==2]; where.reverse()" "for i in where: del thelist[i]"
1000 loops, best of 3: 1.11 msec per loop
$ python -mtimeit -s"lod=[{'id':i, 'name':'nam%s'%i} for i in range(33)]*133; import random" "thelist=list(lod); thelist[:] = [d for d in thelist if d.get('id') != 2]"
1000 loops, best of 3: 998 usec per loop
All in all, it's obviously not worth deploying the subtlety of making and reversing the list of indices to remove, vs the perfectly simple and obvious list comprehension, to possibly gain 100 nanoseconds in one small case -- and lose 113 microseconds in a larger one;-). Avoiding or criticizing simple, straightforward, and perfectly performance-adequate solutions (like list comprehensions for this general class of "remove some items from a list" problems) is a particularly nasty example of Knuth's and Hoare's well-known thesis that "premature optimization is the root of all evil in programming"!-)
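As an aside, the single-item del shown above can also be written with the next builtin, so the same snippet runs on Python 2.6+ and Python 3 alike; a quick sketch (variable names mirror the timeit snippets):

thelist = [{'id': i, 'name': 'nam%s' % i} for i in range(99)]
where = next(i for i, d in enumerate(thelist) if d.get('id') == 2)  # raises StopIteration if nothing matches
del thelist[where]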
Supposing your Python version is 3.6 or greater, and that you don't need the deleted item, this would be less expensive...
If the dictionaries in the list are unique:

for i in range(len(dicts)):
    if dicts[i].get('id') == 2:
        del dicts[i]
        break
If you want to remove all matched items:

# iterate in reverse so deletions don't shift the indices still to be visited
for i in range(len(dicts) - 1, -1, -1):
    if dicts[i].get('id') == 2:
        del dicts[i]
You can also do this to make sure that looking up the 'id' key won't raise a KeyError, regardless of the Python version:
if dicts[i].get('id', None) == 2
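For instance, if one of the dictionaries happened to lack the 'id' key entirely (a made-up case, just for illustration), indexing with dicts[i]['id'] would raise a KeyError, while .get() quietly returns None:

dicts = [{'id': 1, 'name': 'paul'}, {'name': 'john'}]    # second dict has no 'id' key
matches = [d for d in dicts if d.get('id', None) == 2]   # no KeyError; matches == []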
# assume ls contains your list
for i in range(len(ls)):
    if ls[i]['id'] == 2:
        del ls[i]
        break
This will probably be faster than the list comprehension methods on average, because it doesn't traverse the whole list if it finds the item in question early on.
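If you want to check that claim for your own data, a quick sketch with the timeit module (the list contents here are made up for illustration):

import timeit

setup = "lod = [{'id': i, 'name': 'nam%s' % i} for i in range(99)]"

early_exit = """
ls = list(lod)
for i in range(len(ls)):
    if ls[i]['id'] == 2:
        del ls[i]
        break
"""

listcomp = "ls = [d for d in lod if d.get('id') != 2]"

print(timeit.timeit(early_exit, setup=setup, number=10000))
print(timeit.timeit(listcomp, setup=setup, number=10000))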
You could try something along the following lines:
def destructively_remove_if(predicate, lst):
    # delete the first element matching the predicate, in place
    for k in xrange(len(lst)):
        if predicate(lst[k]):
            del lst[k]
            break
    return lst

people = [
    { 'id': 1, 'name': 'John' },
    { 'id': 2, 'name': 'Karl' },
    { 'id': 3, 'name': 'Desdemona' }
]

print "Before:", people
destructively_remove_if(lambda p: p["id"] == 2, people)
print "After:", people
Unless you build something akin to an index over your data, I don't think that you can do better than doing a brute-force "table scan" over the entire list. If your data is sorted by the key you are using, you might be able to employ the bisect module to find the object you are looking for somewhat faster.
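For example, a rough sketch of the bisect idea, assuming the list is already kept sorted by 'id' (the data here is made up):

import bisect

people = [{'id': 1, 'name': 'paul'},
          {'id': 2, 'name': 'john'},
          {'id': 5, 'name': 'ringo'}]    # must already be sorted by 'id'

ids = [d['id'] for d in people]          # parallel list of keys for bisect
i = bisect.bisect_left(ids, 2)           # O(log n) search instead of a linear scan
if i < len(people) and people[i]['id'] == 2:
    del people[i]                        # the deletion itself is still O(n)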
This is not properly an answer (as I think you already have some quite good ones), but... have you considered having a dictionary of <id>:<name> instead of a list of dictionaries?
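For instance, with that representation, removal by id becomes a constant-time dictionary operation (a sketch, assuming the name is the only per-id data you need to keep):

people = {1: 'paul', 2: 'john'}
people.pop(2, None)    # removes id 2; does nothing (and raises no error) if the id is absent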