How can I combine multiple .h5 files?

Submitted by 倾然丶 夕夏残阳落幕 on 2020-01-06 08:24:29

Question


Everything that is available online is too complicated. My database is large, so I exported it in parts. I now have three .h5 files and I would like to combine them into one .h5 file for further work. How can I do it?


Answer 1:


There are at least 3 ways to combine data from individual HDF5 files into a single file:

  1. Use external links to create a new file that points to the data in your other files (supported by both h5py and PyTables)
  2. Copy the data with the HDF Group command-line utility h5copy
  3. Copy the data with Python (using h5py or pytables)

An example of external links is available here:
https://stackoverflow.com/a/55399562/10462884
It shows how to create the links and then how to dereference them.
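As a minimal sketch of the external-link approach (file and dataset names here are hypothetical, chosen for illustration), creating a link with h5py and then reading the target data through it looks like this:

```python
import numpy as np
import h5py

# Create a source file with one dataset (hypothetical names for illustration).
with h5py.File('source.h5', 'w') as h5f:
    h5f.create_dataset('data', data=np.arange(10.0))

# Create a master file whose 'link1' entry points at the source file's root group.
with h5py.File('master.h5', 'w') as h5f:
    h5f['link1'] = h5py.ExternalLink('source.h5', '/')

# Dereferencing is transparent: read through the link as if the data were local.
with h5py.File('master.h5', 'r') as h5f:
    arr = h5f['link1/data'][:]   # resolves the link and reads from source.h5
    print(arr.shape)             # (10,)
```

Note that the data stays in source.h5; deleting or moving that file breaks the link.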

Documentation for h5copy is here:
https://support.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Copy
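As a hedged sketch of the h5copy approach (assuming h5copy is on your PATH, and using the same file1.h5..file3.h5 layout as the examples below, which this snippet first creates with python3/h5py), each invocation copies one source object into the combined file:

```shell
# Create small input files for the demo (assumes python3 with h5py and numpy).
for i in 1 2 3; do
  python3 -c "import h5py, numpy as np; f = h5py.File('file$i.h5', 'w'); f.create_dataset('data_$i', data=np.random.random((10, 5))); f.close()"
done
# -i: input file, -o: output file, -s: source object, -d: destination object.
# Repeated runs against the same -o file add each object to it.
h5copy -i file1.h5 -o combined.h5 -s /data_1 -d /data_1
h5copy -i file2.h5 -o combined.h5 -s /data_2 -d /data_2
h5copy -i file3.h5 -o combined.h5 -s /data_3 -d /data_3
```

Like Method 2 below, this keeps each dataset separate under its original name; it does not merge rows into one dataset.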

Copying with h5py or PyTables is more involved.




Answer 2:


For those who prefer PyTables, I redid my h5py examples to show different ways to copy data between 2 HDF5 files. These examples use the same example HDF5 files as before. Each file only has one dataset. When you have multiple datasets, you can extend this process with walk_nodes() in PyTables.
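A minimal sketch of that extension (file and dataset names here are hypothetical), showing how walk_nodes() visits every Array below the root so the copy loops in the methods below could handle multiple datasets per file:

```python
import numpy as np
import tables as tb

# Build a small demo file with two Arrays (hypothetical names).
with tb.open_file('multi_demo.h5', mode='w') as h5f:
    h5f.create_array('/', 'ds1', obj=np.arange(10.0))
    h5f.create_array('/', 'ds2', obj=np.arange(20.0))

# walk_nodes() yields every node below the given group; classname='Array'
# filters to Array datasets, so a copy loop can visit each one in turn.
names = []
with tb.open_file('multi_demo.h5', mode='r') as h5f:
    for node in h5f.walk_nodes('/', classname='Array'):
        names.append(node.name)
print(sorted(names))   # ['ds1', 'ds2']
```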

All methods use glob() to find the HDF5 files used in the operations below.

Method 1: Create External Links
Similar to h5py, it creates 3 Groups in the new HDF5 file, each with an external link to the original data. The data is NOT copied.

import glob
import tables as tb

with tb.open_file('table_links_2.h5', mode='w') as h5fw:
    link_cnt = 0
    for h5name in glob.glob('file*.h5'):
        link_cnt += 1
        # Each link points at the root group of one source file.
        h5fw.create_external_link('/', 'link' + str(link_cnt), h5name + ':/')

Method 2: Copy Data 'as-is'
This copies the data from each dataset in the original files to the new file, keeping the original dataset names. The copied objects are the same type as in the source HDF5 files; in this case they are PyTables Arrays (because all columns have the same type). Because datasets are copied under their source names, each must have a different name. The data is not merged into a single dataset.

import glob
import tables as tb

with tb.open_file('table_copy_2.h5', mode='w') as h5fw:
    for h5name in glob.glob('file*.h5'):
        with tb.open_file(h5name, mode='r') as h5fr:
            print(h5fr.root._v_children)
            # Copy every child of the source root into the new file's root.
            h5fr.root._f_copy_children(h5fw.root)

Method 3a: Merge all data into 1 Array
This copies and merges the data from each dataset in the original files into a single dataset in the new file. Again, the data is saved as a PyTables Array. There are no restrictions on the dataset names. First I read the data and append it to a NumPy array. Once all files have been processed, the NumPy array is copied to a PyTables Array. This process holds the full NumPy array in memory, so it may not work for large datasets. You can avoid this limitation by using a PyTables EArray (Enlargeable Array); see Method 3b.

import glob
import numpy as np
import tables as tb

with tb.open_file('table_merge_2a.h5', mode='w') as h5fw:
    row1 = 0
    for h5name in glob.glob('file*.h5'):
        with tb.open_file(h5name, mode='r') as h5fr:
            dset1 = h5fr.root._f_list_nodes()[0]
            arr_data = dset1[:]
        if row1 == 0:
            all_data = arr_data.copy()
        else:
            all_data = np.append(all_data, arr_data, axis=0)
        row1 += arr_data.shape[0]
    # Write the accumulated array to a single Array in the new file.
    h5fw.create_array(h5fw.root, 'alldata', obj=all_data)

Method 3b: Merge all data into 1 Enlargeable EArray
This is similar to the method above, but saves the data incrementally in a PyTables EArray. The EArray.append() method is used to add the data. This process reduces the memory issues in Method 3a.

import glob
import tables as tb

with tb.open_file('table_merge_2b.h5', mode='w') as h5fw:
    row1 = 0
    for h5name in glob.glob('file*.h5'):
        with tb.open_file(h5name, mode='r') as h5fr:
            dset1 = h5fr.root._f_list_nodes()[0]
            arr_data = dset1[:]
        if row1 == 0:
            # shape=(0, ncols) makes the first axis enlargeable;
            # obj seeds the EArray with the first file's data.
            earr = h5fw.create_earray(h5fw.root, 'alldata',
                                      shape=(0, arr_data.shape[1]), obj=arr_data)
        else:
            earr.append(arr_data)
        row1 += arr_data.shape[0]

Method 4: Merge all data into 1 Table
This example highlights the differences between h5py and PyTables. In h5py, datasets can reference np.arrays or np.recarrays; h5py deals with the different dtypes. In PyTables, Arrays (and CArrays and EArrays) reference np.ndarray data, while Tables reference np.recarray data. This example shows how to convert the np.ndarray data from the source files into np.recarray data suitable for Table objects. It also shows how to use Table.append(), similar to EArray.append() in Method 3b.

import glob
import numpy as np
import tables as tb

with tb.open_file('table_append_2.h5', mode='w') as h5fw:
    row1 = 0
    for h5name in glob.glob('file*.h5'):
        with tb.open_file(h5name, mode='r') as h5fr:
            dset1 = h5fr.root._f_list_nodes()[0]
            arr_data = dset1[:]
        # Convert the float ndarray to a recarray with named fields.
        ds_dt = [('f1', float), ('f2', float), ('f3', float),
                 ('f4', float), ('f5', float)]
        recarr_data = np.rec.array(arr_data, dtype=ds_dt)
        if row1 == 0:
            data_table = h5fw.create_table('/', 'alldata', obj=recarr_data)
        else:
            data_table.append(recarr_data)
        h5fw.flush()
        row1 += arr_data.shape[0]



Answer 3:


These examples show how to use h5py to copy datasets between 2 HDF5 files. See my other answer for PyTables examples. I created some simple HDF5 files to mimic CSV type data (all floats, but the process is the same if you have mixed data types). Based on your description, each file only has one dataset. When you have multiple datasets, you can extend this process with visititems() in h5py.
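A minimal sketch of that extension (file and dataset names here are hypothetical), showing how visititems() can collect every dataset path, including datasets nested in groups, so the copy loops below could handle multiple datasets per file:

```python
import numpy as np
import h5py

# Demo file with datasets nested in groups (hypothetical names).
with h5py.File('multi_demo.h5', 'w') as h5f:
    h5f.create_dataset('g1/ds1', data=np.arange(10.0))
    h5f.create_dataset('g2/ds2', data=np.arange(20.0))

# visititems() calls the function once for every object in the file;
# the isinstance check keeps datasets and skips groups.
paths = []
with h5py.File('multi_demo.h5', 'r') as h5f:
    h5f.visititems(lambda name, obj: paths.append(name)
                   if isinstance(obj, h5py.Dataset) else None)
print(sorted(paths))   # ['g1/ds1', 'g2/ds2']
```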

Note: code to create the HDF5 files used in the examples is at the end.

All methods use glob() to find the HDF5 files used in the operations below.

Method 1: Create External Links
This results in 3 Groups in the new HDF5 file, each with an external link to the original data. This does not copy the data, but provides access to the data in all files via the links in 1 file.

import glob
import h5py

with h5py.File('table_links.h5', mode='w') as h5fw:
    link_cnt = 0
    for h5name in glob.glob('file*.h5'):
        link_cnt += 1
        # Each link points at the root group of one source file.
        h5fw['link' + str(link_cnt)] = h5py.ExternalLink(h5name, '/')

Method 2: Copy Data 'as-is'
This copies the data from each dataset in the original file to the new file using the original dataset name. This requires datasets in each file to have different names. The data is not merged into one dataset.

import glob
import h5py

with h5py.File('table_copy.h5', mode='w') as h5fw:
    for h5name in glob.glob('file*.h5'):
        with h5py.File(h5name, 'r') as h5fr:
            dset1 = list(h5fr.keys())[0]
            arr_data = h5fr[dset1][:]
        h5fw.create_dataset(dset1, data=arr_data)

Method 3a: Merge all data into 1 Fixed size Dataset
This copies and merges the data from each dataset in the original files into a single dataset in the new file. In this example there are no restrictions on the dataset names. Also, I initially create a large dataset and don't resize it. This assumes there are enough rows to hold all merged data. Checks should be added for production work.

import glob
import h5py

with h5py.File('table_merge.h5', mode='w') as h5fw:
    row1 = 0
    for h5name in glob.glob('file*.h5'):
        with h5py.File(h5name, 'r') as h5fr:
            dset1 = list(h5fr.keys())[0]
            arr_data = h5fr[dset1][:]
        # require_dataset creates 'alldata' on the first pass and
        # returns the existing dataset on later passes.
        h5fw.require_dataset('alldata', dtype="f", shape=(50, 5), maxshape=(100, 5))
        h5fw['alldata'][row1:row1 + arr_data.shape[0], :] = arr_data[:]
        row1 += arr_data.shape[0]

Method 3b: Merge all data into 1 Resizeable Dataset
This is similar to the method above. However, I create a resizeable dataset and enlarge it based on the amount of data that is read and added.

import glob
import h5py

with h5py.File('table_merge.h5', mode='w') as h5fw:
    row1 = 0
    for h5name in glob.glob('file*.h5'):
        with h5py.File(h5name, 'r') as h5fr:
            dset1 = list(h5fr.keys())[0]
            arr_data = h5fr[dset1][:]
        dslen = arr_data.shape[0]
        cols = arr_data.shape[1]
        if row1 == 0:
            # maxshape=(None, cols) makes the first axis resizeable.
            h5fw.create_dataset('alldata', dtype="f", shape=(dslen, cols),
                                maxshape=(None, cols))
        if row1 + dslen <= len(h5fw['alldata']):
            h5fw['alldata'][row1:row1 + dslen, :] = arr_data[:]
        else:
            # Grow the dataset before writing the new rows.
            h5fw['alldata'].resize((row1 + dslen, cols))
            h5fw['alldata'][row1:row1 + dslen, :] = arr_data[:]
        row1 += dslen

To create the source files read above:

import numpy as np
import h5py

for fcnt in range(1, 4):
    fname = 'file' + str(fcnt) + '.h5'
    arr = np.random.random(50).reshape(10, 5)
    with h5py.File(fname, 'w') as h5fw:
        h5fw.create_dataset('data_' + str(fcnt), data=arr)


Source: https://stackoverflow.com/questions/58187004/how-can-i-combine-multiple-h5-file
