I have a fairly large table in Python, read from an .h5 file. The start of the table looks something like this:
table =
[WIND REL DIRECTION [deg]] [WIND S
resample is your friend.
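If the file was written by pandas, you can read the table back directly. A minimal sketch of that, assuming pandas wrote the .h5 file; the filename and key here are made up:

import pandas as pd

table = pd.read_hdf('data.h5', key='wind')  # hypothetical filename and key

resample needs a DatetimeIndex, and judging by the num2date call below your index holds matplotlib date numbers, so convert it first: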
import numpy as np
import pandas as pd
import matplotlib.dates as mdates

# matplotlib date numbers -> datetimes; first 10 rows and random data for a small demo
idx = mdates.num2date(table.index[:10])
df = pd.DataFrame({'direction': np.random.randn(10),
                   'speed': np.random.randn(10)},
                  index=idx)
>>> df
direction speed
2014-05-28 08:53:59.971204+00:00 0.205429 0.699439
2014-05-28 08:54:01.008002+00:00 0.383199 -0.392261
2014-05-28 08:54:04.031995+00:00 -2.146569 -0.325526
2014-05-28 08:54:04.982402+00:00 1.572352 1.289276
2014-05-28 08:54:06.019200+00:00 0.880394 -0.440667
2014-05-28 08:54:11.980795+00:00 -1.343758 0.615725
2014-05-28 08:54:13.017603+00:00 -1.713043 0.552017
2014-05-28 08:54:13.968000+00:00 -0.350017 0.728910
2014-05-28 08:54:15.004798+00:00 -0.619273 0.286762
2014-05-28 08:54:16.041596+00:00 0.459747 0.524788
>>> df.resample('15S').mean()  # in older pandas: df.resample('15S', how='mean'), where mean was the default
direction speed
2014-05-28 08:53:45+00:00 0.205429 0.699439
2014-05-28 08:54:00+00:00 -0.388206 0.289639
2014-05-28 08:54:15+00:00 -0.079763 0.405775
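Other aggregations work the same way through the resampler's agg method. For instance, a sketch taking the mean direction but the maximum speed per 15-second bin (the choice of aggregations is just for illustration):

>>> df.resample('15S').agg({'direction': 'mean', 'speed': 'max'})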
Performance is similar to the groupby/TimeGrouper method provided by @LondonRob. I tested with a DataFrame of 1 million rows:
n = 10**6  # randn and periods require integers, so avoid the float 1e6
df = pd.DataFrame({'direction': np.random.randn(n),
                   'speed': np.random.randn(n)},
                  index=pd.date_range(start='2015-1-1', periods=n, freq='1S'))
>>> %timeit df.resample('15S')
100 loops, best of 3: 15.6 ms per loop
>>> %timeit df.groupby(pd.TimeGrouper(freq='15S')).mean()
100 loops, best of 3: 15.7 ms per loop
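Note that these timings are from an older pandas: pd.TimeGrouper has since been removed in favor of pd.Grouper, and resample alone now returns a lazy Resampler object rather than computing anything. The modern equivalents of the two timed calls would be:

>>> %timeit df.resample('15S').mean()
>>> %timeit df.groupby(pd.Grouper(freq='15S')).mean()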