Binning column with python pandas

Asked 2020-11-22 00:36

I have a Data Frame column with numeric values:

df['percentage'].head()
46.5
44.2
100.0
42.12

I want to see the column as bin counts, i.e. how many values fall into each range.

2 Answers
  • 2020-11-22 00:52

    You can use pandas.cut:

    bins = [0, 1, 5, 10, 25, 50, 100]
    df['binned'] = pd.cut(df['percentage'], bins)
    print (df)
       percentage     binned
    0       46.50   (25, 50]
    1       44.20   (25, 50]
    2      100.00  (50, 100]
    3       42.12   (25, 50]
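By default pd.cut uses right-closed intervals, so the exact value 0 would fall outside the first bin (0, 1]. A minimal sketch of the relevant parameters (the DataFrame is reconstructed from the sample values above; `right_closed`, `left_closed` and `incl` are names introduced here for illustration):

```python
import pandas as pd

df = pd.DataFrame({'percentage': [46.5, 44.2, 100.0, 42.12]})
bins = [0, 1, 5, 10, 25, 50, 100]

# Right-closed (the default): intervals are (0, 1], (1, 5], ...
right_closed = pd.cut(df['percentage'], bins)

# Left-closed: intervals are [0, 1), [1, 5), ...
# note 100.0 no longer fits in any bin and becomes NaN
left_closed = pd.cut(df['percentage'], bins, right=False)

# include_lowest=True makes the first interval left-inclusive,
# so the minimum edge 0 itself would be binned
incl = pd.cut(df['percentage'], bins, include_lowest=True)
```

Which variant you want depends on whether values sitting exactly on a bin edge should belong to the lower or the upper bin.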
    

    If you need numeric labels for each bin instead of Interval objects, pass labels:

    bins = [0, 1, 5, 10, 25, 50, 100]
    labels = [1,2,3,4,5,6]
    df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)
    print (df)
       percentage binned
    0       46.50      5
    1       44.20      5
    2      100.00      6
    3       42.12      5
    

    Or numpy.searchsorted:

    import numpy as np

    bins = [0, 1, 5, 10, 25, 50, 100]
    df['binned'] = np.searchsorted(bins, df['percentage'].values)
    print (df)
       percentage  binned
    0       46.50       5
    1       44.20       5
    2      100.00       6
    3       42.12       5
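One subtlety worth noting about the searchsorted approach: its side argument decides what happens to values sitting exactly on a bin edge. A small sketch (bin edges assumed from the answer above):

```python
import numpy as np

bins = [0, 1, 5, 10, 25, 50, 100]

# side='left' (the default): an exact edge value like 50 is assigned to the
# bin it closes, matching pd.cut's right-closed interval (25, 50] -> label 5
print(np.searchsorted(bins, [50, 46.5], side='left'))   # [5 5]

# side='right': the same edge value 50 moves into the next bin
print(np.searchsorted(bins, [50, 46.5], side='right'))  # [6 5]
```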
    

    You can then count the values per bin with value_counts, or with groupby and aggregating size:

    s = pd.cut(df['percentage'], bins=bins).value_counts()
    print (s)
    (25, 50]     3
    (50, 100]    1
    (10, 25]     0
    (5, 10]      0
    (1, 5]       0
    (0, 1]       0
    Name: percentage, dtype: int64
    

    s = df.groupby(pd.cut(df['percentage'], bins=bins)).size()
    print (s)
    percentage
    (0, 1]       0
    (1, 5]       0
    (5, 10]      0
    (10, 25]     0
    (25, 50]     3
    (50, 100]    1
    dtype: int64
    

    By default, cut returns a Categorical.

    Series methods like Series.value_counts() therefore report all categories, even those not present in the data; this is how operations on categoricals work.
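If you only want the bins that actually occur, groupby accepts observed=True; and value_counts(sort=False) keeps the bins in interval order instead of ordering by count. A quick sketch assuming the same sample data (the DataFrame is reconstructed here):

```python
import pandas as pd

df = pd.DataFrame({'percentage': [46.5, 44.2, 100.0, 42.12]})
bins = [0, 1, 5, 10, 25, 50, 100]

# Keep bins in interval order rather than sorting by count
s = pd.cut(df['percentage'], bins=bins).value_counts(sort=False)

# observed=True drops the empty categories entirely:
# only (25, 50] and (50, 100] remain
observed = df.groupby(pd.cut(df['percentage'], bins=bins), observed=True).size()
print(observed)
```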

  • 2020-11-22 01:04

    Using the numba module to speed things up.

    On big datasets (more than ~500k rows), pd.cut can be quite slow at binning data.

    I wrote my own function using numba's just-in-time compilation, which is roughly 16x faster:

    import numpy as np
    from numba import njit
    
    @njit
    def cut(arr):
        bins = np.empty(arr.shape[0])
        for idx, x in enumerate(arr):
            if (x >= 0) & (x < 1):
                bins[idx] = 1
            elif (x >= 1) & (x < 5):
                bins[idx] = 2
            elif (x >= 5) & (x < 10):
                bins[idx] = 3
            elif (x >= 10) & (x < 25):
                bins[idx] = 4
            elif (x >= 25) & (x < 50):
                bins[idx] = 5
            elif (x >= 50) & (x < 100):
                bins[idx] = 6
            else:
                # note: these bins are left-closed, so 100.0 falls through
                # to 7 here (pd.cut's right-closed (50, 100] would give 6)
                bins[idx] = 7
    
        return bins
    
    cut(df['percentage'].to_numpy())
    
    # array([5., 5., 7., 5.])
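For non-negative inputs, the hard-coded branches above are equivalent to a single vectorized np.searchsorted call with side='right', which avoids numba entirely; a sketch (`edges` and `values` are names introduced here, with the sample data from the question):

```python
import numpy as np

edges = np.array([0, 1, 5, 10, 25, 50, 100])
values = np.array([46.5, 44.2, 100.0, 42.12])

# side='right' maps [0, 1) -> 1, [1, 5) -> 2, ..., and anything >= 100 -> 7,
# matching the njit function above for non-negative input
# (negative values would get 0 here, not 7 as in the else branch)
binned = np.searchsorted(edges, values, side='right')
print(binned)  # [5 5 7 5]
```

Whether this beats the njit version on 8 million rows would need measuring; both avoid the interval machinery that makes pd.cut slower.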
    

    Optional: you can also map it to bins as strings:

    a = cut(df['percentage'].to_numpy())
    
    conversion_dict = {1: 'bin1',
                       2: 'bin2',
                       3: 'bin3',
                       4: 'bin4',
                       5: 'bin5',
                       6: 'bin6',
                       7: 'bin7'}
    
    bins = list(map(conversion_dict.get, a))
    
    # ['bin5', 'bin5', 'bin7', 'bin5']
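If raw speed is not critical, the same string names can also come straight from pd.cut by passing string labels; a sketch with the sample data (note the boundary difference: cut's right-closed bins put 100.0 in 'bin6', whereas the njit version above returns 7):

```python
import pandas as pd

df = pd.DataFrame({'percentage': [46.5, 44.2, 100.0, 42.12]})
bins = [0, 1, 5, 10, 25, 50, 100]
labels = ['bin1', 'bin2', 'bin3', 'bin4', 'bin5', 'bin6']

named = pd.cut(df['percentage'], bins=bins, labels=labels)
print(list(named))  # ['bin5', 'bin5', 'bin6', 'bin5']
```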
    

    Speed comparison:

    # create dataframe of 8 million rows for testing
    dfbig = pd.concat([df]*2000000, ignore_index=True)
    
    dfbig.shape
    
    # (8000000, 1)
    
    %%timeit
    cut(dfbig['percentage'].to_numpy())
    
    # 38 ms ± 616 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
    
    %%timeit
    bins = [0, 1, 5, 10, 25, 50, 100]
    labels = [1,2,3,4,5,6]
    pd.cut(dfbig['percentage'], bins=bins, labels=labels)
    
    # 215 ms ± 9.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
    