This question is somewhat related to "Concatenate a large number of HDF5 files".
I have several huge HDF5 files (~20GB compressed) that do not fit in RAM. Each of them stores several pandas.DataFrames of identical format, with indexes that do not overlap.
I'd like to combine them into a single HDF5 file with all the DataFrames properly concatenated. One way to do this is to read each file chunk-by-chunk and append the chunks to a single output file, but that would take quite a lot of time.
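For reference, the chunk-by-chunk baseline could look roughly like this with pandas.HDFStore (a minimal sketch; the file names, table key, and chunksize are placeholders, not from the original question):

import pandas as pd

src_paths = ["store_1.h5", "store_2.h5"]  # hypothetical input files
key = "table_name"                        # hypothetical table key

with pd.HDFStore("combined.h5", mode="w", complevel=9, complib="blosc") as out:
    for path in src_paths:
        with pd.HDFStore(path, mode="r") as src:
            # select() with a chunksize yields DataFrames of at most that
            # many rows, so only one chunk is held in RAM at a time
            for chunk in src.select(key, chunksize=1_000_000):
                out.append(key, chunk)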
Are there any special tools or methods to do this without iterating through files?
See the docs for the odo project (formerly into). Note that if you use the into library, the argument order is reversed (that was the motivation for the rename, to avoid confusion!).
You can basically do:
from odo import odo

# copy the table from the source store into the destination store
odo('hdfstore://path_store_1::table_name',
    'hdfstore://path_store_new_name::table_name')
Doing multiple operations like this will append to the right-hand-side (destination) store, and odo handles the chunked reads and writes for you automatically.
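For example, merging several source stores into one destination could be a simple loop over odo calls (a sketch; the store paths and table name are placeholders):

from odo import odo

# hypothetical list of source store paths; each call appends to the
# same destination table in turn
sources = ["path_store_1", "path_store_2", "path_store_3"]
for path in sources:
    odo("hdfstore://%s::table_name" % path,
        "hdfstore://path_store_new_name::table_name")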
Source: https://stackoverflow.com/questions/28918851/concatenate-two-big-pandas-hdfstore-hdf5-files