Deserialization of large numpy arrays using pickle is an order of magnitude slower than using numpy
Question: I am deserializing large numpy arrays (500 MB in this example), and I find the timings vary by orders of magnitude between approaches. Below are the 3 approaches I've timed. I receive the data from the multiprocessing.shared_memory package, so it arrives as a memoryview object; in these simple examples, however, I just pre-create a byte array to run the test. I wonder whether there are any mistakes in these approaches, or whether there are other techniques I haven't tried. Deserialization in Python
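The three timed approaches themselves are not reproduced above, but a minimal sketch of the likely contrast (an assumption: `pickle.loads` on a pickled payload versus a zero-copy `np.frombuffer` over the raw bytes, with a smaller array than the 500 MB in the question so it runs quickly) might look like:

```python
import pickle
import time

import numpy as np

# Test array, serialized two ways (much smaller than 500 MB for a quick run).
arr = np.arange(1_000_000, dtype=np.float64)
pickled = pickle.dumps(arr, protocol=pickle.HIGHEST_PROTOCOL)
raw = arr.tobytes()  # stand-in for the bytes/memoryview from shared memory

# Approach 1: pickle.loads copies the buffer while rebuilding the array object.
t0 = time.perf_counter()
a1 = pickle.loads(pickled)
t1 = time.perf_counter()

# Approach 2: np.frombuffer wraps the existing bytes in a read-only view
# without copying, so its cost does not grow with array size.
a2 = np.frombuffer(raw, dtype=np.float64)
t2 = time.perf_counter()

print(f"pickle.loads : {t1 - t0:.6f}s")
print(f"np.frombuffer: {t2 - t1:.6f}s")
```

Note that `np.frombuffer` shares memory with the source buffer, so the resulting array is only valid as long as the underlying shared-memory segment stays alive.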