Approach #1 : Here's one with array data. Note that it assumes each column holds the same number of non-NaN values (as in the sample and the benchmarking setup below), since the reshape needs a regular shape -
a = df.values.T
df_out = pd.DataFrame(a[~np.isnan(a)].reshape(a.shape[0],-1).T)
Sample run -
In [450]: df
Out[450]:
     0    1    2
0  1.0  NaN  NaN
1  9.0  7.0  8.0
2  NaN  NaN  NaN
3  NaN  5.0  7.0
In [451]: a = df.values.T
In [452]: pd.DataFrame(a[~np.isnan(a)].reshape(a.shape[0],-1).T)
Out[452]:
     0    1    2
0  1.0  7.0  8.0
1  9.0  5.0  7.0
Approach #2 : As it turns out, we already have a utility for it : justify (a minimal sketch of it is included after the sample run) -
In [1]: df
Out[1]:
     0    1    2
0  1.0  NaN  NaN
1  9.0  7.0  8.0
2  NaN  NaN  NaN
3  NaN  5.0  7.0
In [2]: pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=0, side='up')[:2])
Out[2]:
     0    1    2
0  1.0  7.0  8.0
1  9.0  5.0  7.0
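For reference, here's a minimal sketch of such a justify helper (the mask-and-sort version, with the signature assumed from the call above; adapt as needed) -

import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    # Push all valid elements of 2D array `a` towards the given side,
    # leaving `invalid_val` in the remaining slots.
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)              # invalid (False) first
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]                       # fill row-wise
    else:
        out.T[justified_mask.T] = a.T[mask.T]               # fill column-wise
    return out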
Benchmarking
Approaches -
def app0(df): # @jezrael's soln
    return df.apply(lambda x: pd.Series(x.dropna().values))

def app1(df): # Proposed in this post
    a = df.values.T
    return pd.DataFrame(a[~np.isnan(a)].reshape(a.shape[0],-1).T)

def app2(df): # Proposed in this post
    a = df.values
    return pd.DataFrame(justify(a, invalid_val=np.nan, axis=0, side='up')[:5])

def app3(df): # @piRSquared's soln-1
    v = df.values
    r = np.arange(v.shape[1])[None, :]
    a = np.isnan(v).argsort(0)
    return pd.DataFrame(v[a[:5], r], columns=df.columns)

def app4(df): # @piRSquared's soln-2
    return pd.DataFrame(
        (lambda a, s: a[~np.isnan(a)].reshape(-1, s, order='F'))
        (df.values.ravel('F'), df.shape[1]),
        columns=df.columns
    )
Timings -
In [513]: # Setup input dataframe with exactly 5 non-NaNs per col
...: m,n = 500,100
...: N = 5
...: a = np.full((m,n), np.nan)
...: row_idx = np.random.rand(m,n).argsort(0)[:N]
...: a[row_idx, np.arange(n)] = np.random.randint(0,9,(N,n))
...: df = pd.DataFrame(a)
...:
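As a quick sanity check, all approaches should give the same output on this setup; one way to verify -

assert all(app(df).equals(app0(df)) for app in (app1, app2, app3, app4))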
In [572]: %timeit app0(df)
...: %timeit app1(df)
...: %timeit app2(df)
...: %timeit app3(df)
...: %timeit app4(df)
...:
10 loops, best of 3: 46.1 ms per loop
10000 loops, best of 3: 132 µs per loop
1000 loops, best of 3: 554 µs per loop
1000 loops, best of 3: 446 µs per loop
10000 loops, best of 3: 148 µs per loop