How to parallelize computation on “big data” dictionary of lists?
Question: I have a question about doing calculations on a Python dictionary. In this case, the dictionary has millions of keys, and the lists are similarly long. There seems to be disagreement about whether parallelization can be used here, so I'll ask the question more explicitly. Here is the original question: Optimizing parsing of massive python dictionary, multi-threading

This is a toy (small) Python dictionary:

example_dict1 = {'key1': [367, 30, 847, 482, 887, 654, 347, 504, 413, 821]}  # further keys omitted here
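For concreteness, here is a minimal sketch of one way a per-key computation on such a dictionary could be parallelized with the standard-library multiprocessing module, assuming the work on each list is independent of every other key. The function compute_stats and the second key's data are hypothetical illustrations, not from the original post:

import multiprocessing as mp

def compute_stats(item):
    # Hypothetical per-key computation: the sum and mean of the list.
    key, values = item
    total = sum(values)
    return key, (total, total / len(values))

if __name__ == '__main__':
    example_dict1 = {'key1': [367, 30, 847, 482, 887, 654, 347, 504, 413, 821],
                     'key2': [754, 915, 622, 149, 279, 192, 312, 203, 742, 846]}  # illustrative data
    # Pool.map distributes the (key, list) pairs across worker processes.
    # Processes sidestep the GIL, but each item is pickled to the workers,
    # so this only pays off when the per-key work outweighs serialization.
    with mp.Pool() as pool:
        results = dict(pool.map(compute_stats, example_dict1.items()))
    print(results)

For pure-Python numeric loops like this, processes rather than threads are usually the right tool, since CPython threads all share a single GIL and cannot run CPU-bound bytecode concurrently.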