Question
I have a nested list (a list of lists) and I want to remove the duplicates, but I'm getting an error. This is an example:
images = [
    [
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "marine-transportation-transports-maritimes.xml"
        },
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "railway-transportation-transports-ferroviaires.xml"
        }
    ],
    [
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "marine-transportation-transports-maritimes.xml"
        },
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "railway-transportation-transports-ferroviaires.xml"
        }
    ],
    [
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "marine-transportation-transports-maritimes.xml"
        },
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "railway-transportation-transports-ferroviaires.xml"
        }
    ]
]
So in the end, images should only contain:
[
    [
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "marine-transportation-transports-maritimes.xml"
        },
        {
            "image_link": "1969.1523.001.aa.cs.jpg",
            "catalogue_number": "1969.1523",
            "dataset_name": "railway-transportation-transports-ferroviaires.xml"
        }
    ]
]
I'm using the set function:

set.__doc__
'set() -> new empty set object\nset(iterable) -> new set object\n\nBuild an unordered collection of unique elements.'
My traceback:
list(set(images))
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: unhashable type: 'list'
To make it simpler: how can I remove all the duplicates in this example?
example = [ [{'a':1, 'b':2}, 'w', 2], [{'a':1, 'b':2}, 'w', 2] ]
#result
#example = [[{'a':1, 'b':2}, 'w', 2] ]
Answer 1:
The set and dict containers rely on hashing their elements. Mutable containers like list (and set and dict themselves) cannot be hashed: they may be changed later on, so a constant hash value makes no sense.
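For example, a quick illustration of the difference (my own addition, not part of the original answer):

hash(("a", 1))    # fine: a tuple of hashable values is itself hashable
hash(["a", 1])    # raises TypeError: unhashable type: 'list'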
But you could transform all your data into (nested) tuples and finally into a set. Since tuple is an immutable container - and your data is hashable (strings) - this can work. Here's a nasty one-liner for your specific images case that does the trick:
images_set = set([tuple([tuple(sorted(image_dict.items()))
                 for image_dict in inner_list]) for inner_list in images])
and print(images_set) prints:
{((('catalogue_number', '1969.1523'),
('dataset_name', 'marine-transportation-transports-maritimes.xml'),
('image_link', '1969.1523.001.aa.cs.jpg')),
(('catalogue_number', '1969.1523'),
('dataset_name', 'railway-transportation-transports-ferroviaires.xml'),
('image_link', '1969.1523.001.aa.cs.jpg')))}
EDIT: There's no guaranteed order for the items() view of a dictionary, so I also added sorted() to ensure a consistent order.
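If you need the result back in the original shape (a list of lists of dicts), you can invert the transformation; this last step is my own sketch, not part of the original answer, and assumes images_set was built by the one-liner above:

# Rebuild each inner tuple of (key, value) pairs into a list of dicts.
unique_images = [[dict(item_pairs) for item_pairs in inner_tuple]
                 for inner_tuple in images_set]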
Answer 2:
It seems like you want something like this:
>>> example = [ [{'a':1, 'b':2}, 'w', 2], [{'a':1, 'b':2}, 'w', 2] ]
>>> l = []
>>> for i in example:
...     if i not in l:
...         l.append(i)
...
>>> l
[[{'b': 2, 'a': 1}, 'w', 2]]
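The same approach works unchanged on the nested images list from the question, because == compares lists and dicts by value. A quick sketch of that (my own addition, not from the original answer):

unique = []
for inner in images:           # inner is a list of dicts
    if inner not in unique:    # list equality compares the nested dicts by value
        unique.append(inner)
# unique now holds one copy of each distinct inner list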
Answer 3:
You can use compiler.ast.flatten (Python 2 only; the compiler package was removed in Python 3) to flatten your list, convert each dictionary into a hashable object so you can build a set, and then convert back to dict, all with one list comprehension:
>>> from compiler.ast import flatten
>>> [dict(item) for item in set(tuple(i.items()) for i in flatten(images))]
[{'image_link': '1969.1523.001.aa.cs.jpg', 'catalogue_number': '1969.1523', 'dataset_name': 'marine-transportation-transports-maritimes.xml'}, {'image_link': '1969.1523.001.aa.cs.jpg', 'catalogue_number': '1969.1523', 'dataset_name': 'railway-transportation-transports-ferroviaires.xml'}]
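On Python 3, where the compiler module no longer exists, itertools.chain.from_iterable can flatten the one level of nesting instead; this substitution is my own sketch, not part of the original answer:

from itertools import chain

# Flatten one level, deduplicate via a set of sorted (key, value) tuples, rebuild dicts.
unique_dicts = [dict(t) for t in {tuple(sorted(d.items()))
                                  for d in chain.from_iterable(images)}]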
Source: https://stackoverflow.com/questions/28694845/get-unique-values-from-a-nested-list-in-python