PyTables - big memory consumption using cols method

Question


What is the purpose of using the cols method in PyTables? I have a big dataset and I am only interested in reading one column from it.

These two methods take the same amount of time, but show totally different memory consumption:

import tables
from sys import getsizeof

f = tables.open_file(myhdf5_path, 'r')

# These two methods take the same amount of time
x = f.root.set1[:500000]['param1']
y = f.root.set1.cols.param1[:500000]

# But totally different memory consumption:
print(getsizeof(x)) # gives me 96
print(getsizeof(y)) # gives me 2000096

They are both the same NumPy array data type. Can anybody explain the purpose of using the cols method?

%time x = f.root.set1[:500000]['param1']  # gives ~7ms
%time y = f.root.set1.cols.param1[:500000]  # gives also about 7ms
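
For reference, a quick check (using x and y from the snippet above) that both calls return the same values and the same dtype, even though getsizeof() reports very different sizes:

import numpy as np

print(type(x), type(y))        # both <class 'numpy.ndarray'>
print(x.dtype == y.dtype)      # True
print(np.array_equal(x, y))    # True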

Answer 1:


Your question caught my curiosity. I typically use table.read(field='name') because it complements the other table.read_* methods I use (for example: .read_where() and .read_coordinates()).
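
For context, here is a minimal sketch of how those two methods read a single field; the names mirror the example table built in the code further below, and the condition 'param1 > 100.0' is just an arbitrary illustration:

tbl = h5f.root.set1

# rows matching a condition, returning only the 'param1' column
subset = tbl.read_where('param1 > 100.0', field='param1')

# specific row numbers (coordinates), returning only the 'param1' column
picked = tbl.read_coordinates([0, 10, 999], field='param1')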

After reviewing the docs, I found at least 4 ways to read one column of table data with PyTables. You showed 2, and there are 2 more:
table.read(field='name')
table.col('name') (singular)

I ran some tests with all 4, plus 2 tests on the entire table (dataset) for additional comparison. I called getsizeof() for all 6 objects, and the reported size varies by method. Although all 4 behave the same with NumPy indexing, I suspect there's a difference in the returned objects. However, I'm not a PyTables developer, so this is more inference than fact. It could also be that getsizeof() interprets the objects differently.
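
One plausible cause, sketched here with a small standalone NumPy example (no HDF5 file involved): selecting a field from a structured array returns a view, and NumPy's __sizeof__ only counts the data buffer when the array owns it, so getsizeof() on a view reports little more than the array header. Checking .flags['OWNDATA'] and .nbytes on the objects below should tell whether that is what happens with x:

import numpy as np
from sys import getsizeof

rec = np.zeros(500000, dtype=[('param1', 'f8'), ('param2', 'f8'), ('param3', 'f8')])

view = rec[:500000]['param1']   # field selection -> a view into rec's buffer
copy = view.copy()              # a standalone array that owns its buffer

print(view.flags['OWNDATA'], getsizeof(view))   # False, ~96 (header only)
print(copy.flags['OWNDATA'], getsizeof(copy))   # True, ~4000096
print(view.nbytes, copy.nbytes)                 # 4000000 for both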

Code Below:

import tables as tb
import numpy as np
from sys import getsizeof

# Create h5 file with 1 dataset

h5f = tb.open_file('SO_55254831.h5', 'w')

mydtype = np.dtype([('param1',float),('param2',float),('param3',float)])

arr = np.array(np.arange(3.*500000.).reshape(500000,3))
recarr = np.core.records.array(arr,dtype=mydtype)

h5f.create_table('/', 'set1', obj=recarr )

# Close, then Reopen file READ ONLY
h5f.close()

h5f = tb.open_file('SO_55254831.h5', 'r')

testds_1 = h5f.root.set1
print ("\nFOR: testds_1 = h5f.root.set1")
print (testds_1.dtype)
print (testds_1.shape)
print (getsizeof(testds_1)) # gives 128

testds_2 = h5f.root.set1.read()
print ("\nFOR: testds_2 = h5f.root.set1.read()")
print (getsizeof(testds_2)) # gives 12000096

x = h5f.root.set1[:500000]['param1']
print ("\nFOR: x = h5f.root.set1[:500000]['param1']")
print(getsizeof(x)) # gives 96

print ("\nFOR: y = h5f.root.set1.cols.param1[:500000]")
y = h5f.root.set1.cols.param1[:500000]
print(getsizeof(y)) # gives 4000096

print ("\nFOR: z = h5f.root.set1.read(stop=500000,field='param1')")
z = h5f.root.set1.read(stop=500000,field='param1')
print(getsizeof(z)) # also gives 4000096

print ("\nFOR: a = h5f.root.set1.col('param1')")
a = h5f.root.set1.col('param1')
print(getsizeof(a)) # also gives 4000096

h5f.close()

Output from Above:

FOR: testds_1 = h5f.root.set1
[('param1', '<f8'), ('param2', '<f8'), ('param3', '<f8')]
(500000,)
128

FOR: testds_2 = h5f.root.set1.read()
12000096

FOR: x = h5f.root.set1[:500000]['param1']
96

FOR: y = h5f.root.set1.cols.param1[:500000]
4000096

FOR: z = h5f.root.set1.read(stop=500000,field='param1')
4000096

FOR: a = h5f.root.set1.col('param1')
4000096
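
The sizes line up with the '<f8' (8 bytes per value) dtype: 4000096 is roughly 500000 * 8 bytes of data plus the ndarray header, and 12000096 is roughly 500000 * 3 * 8 for the full 3-column table. The 96 reported for x is consistent with x being a view rather than an owning array, which can be checked directly:

print(x.flags['OWNDATA'])   # expect False: x is a view
print(x.base is None)       # expect False: the data lives in x.base
print(x.nbytes)             # 4000000: the actual payload of the column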


Source: https://stackoverflow.com/questions/55254831/pytables-big-memory-consumption-using-cols-method
