Question
For a couple of days now, I have been stuck on my machine learning project. I have a Python script that should transform the data for model training by a second script. The first script builds a list of arrays that I would like to dump to disk, and the second script unpickles it.
I have tried using pickle several times, but every time the script attempts to pickle the list, I get a memory error:
Traceback (most recent call last):
File "Prepare_Input.py", line 354, in <module>
pickle.dump(Total_Velocity_Change, file)
MemoryError
And sometimes the script is forced to stop running with a Killed message.
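For context, the dump/load pattern is essentially the standard two-script one. A minimal sketch follows (the list contents and file name are placeholders; the real Total_Velocity_Change is built earlier in Prepare_Input.py, as the traceback shows):

import pickle
import numpy as np

# Stand-in for the real data; the actual list is built earlier in Prepare_Input.py
Total_Velocity_Change = [np.random.randn(1000, 3) for _ in range(10)]

# First script: dump the list of arrays (this is the call that raises MemoryError)
with open("total_velocity_change.pkl", "wb") as fp:
    pickle.dump(Total_Velocity_Change, fp, protocol=pickle.HIGHEST_PROTOCOL)

# Second script: load it back for model training
with open("total_velocity_change.pkl", "rb") as fp:
    Total_Velocity_Change = pickle.load(fp)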
I also tried using hickle; however, the script kept running for a long time and, when left overnight, hickle had dumped a huge file of nearly 10GB (du -sh myfile.hkl). I am certain there is no way the array size can exceed 1.5GB at most, and I can also dump the array to the console (print). With hickle, I had to kill the process to stop the script from running.
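The hickle attempt was presumably the equivalent one-liner (a sketch, again with a stand-in list; myfile.hkl is the file named above):

import hickle
import numpy as np

# Stand-in for the real list of arrays
Total_Velocity_Change = [np.random.randn(1000, 3) for _ in range(10)]

# Dump to the HDF5-backed .hkl file; this is the call that ran overnight
hickle.dump(Total_Velocity_Change, "myfile.hkl", mode="w")

# Load it back in the second script
Total_Velocity_Change = hickle.load("myfile.hkl")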
I also tried all the answers here; unfortunately, none of them worked for me.
Does anyone have an idea how I can safely dump my file to disk for later loading?
Using dill, I get the following error:
Traceback (most recent call last):
File "Prepare_Input.py", line 356, in <module>
dill.dump(Total_Velocity_Change, fp)
File "/home/akil/Desktop/tmd/venv/lib/python3.7/site-packages/dill/_dill.py", line 259, in dump
Pickler(file, protocol, **_kwds).dump(obj)
File "/home/akil/Desktop/tmd/venv/lib/python3.7/site-packages/dill/_dill.py", line 445, in dump
StockPickler.dump(self, obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 735, in save_bytes
self.memoize(obj)
File "/home/akil/anaconda3/lib/python3.7/pickle.py", line 461, in memoize
self.memo[id(obj)] = idx, obj
MemoryError
Answer 1:
If you want to dump a huge list of arrays, you might want to look at dask or klepto. dask could break up the list into lists of sub-arrays, while klepto could break up the list into a dict of sub-arrays (with the key indicating the ordering of the sub-arrays).
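For the dask route, a rough sketch (assuming the arrays can be stacked into a single array; the velocity_chunks directory name is just a placeholder) could look like this:

import numpy as np
import dask.array as da

# Stand-in data: assume the sub-arrays stack into one big array
big = np.stack([np.random.randn(1000, 3) for _ in range(10)])

# Wrap it as a dask array split into chunks, then write each chunk to its
# own .npy file so nothing has to be serialized as one giant blob
x = da.from_array(big, chunks=(1, 1000, 3))
da.to_npy_stack("velocity_chunks", x, axis=0)

# Later, in the second script, read it back lazily
y = da.from_npy_stack("velocity_chunks")

And the klepto route: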
>>> import klepto as kl
>>> import numpy as np
>>> big = np.random.randn(10,100) # could be a huge array
>>> ar = kl.archives.dir_archive('foo', dict(enumerate(big)), cached=False)
>>> list(ar.keys())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
Then each entry is serialized to disk in its own file (as output.pkl):
$ ls foo/K_0/
input.pkl output.pkl
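To load the data back in the second script, the same dir_archive can be reopened and pulled into memory (a sketch; load() with no arguments reads every key from the foo/ archive into the cache, and individual keys can also be accessed one at a time):
>>> ar2 = kl.archives.dir_archive('foo', cached=True)
>>> ar2.load()            # read all entries from disk into the in-memory cache
>>> sorted(ar2.keys())
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> ar2[0].shape          # each entry is one of the original sub-arrays
(100,)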
Source: https://stackoverflow.com/questions/60577147/python-object-serialization-having-issue-with-pickle-vs-hickle