Sorting in pandas for large datasets

故里飘歌 2020-12-08 11:21

I would like to sort my data by a given column, specifically p-values. However, the issue is that I am not able to load my entire data into memory, so a straightforward in-memory sort doesn't work.

5 answers
  • 2020-12-08 11:31

    If your csv file contains only structured data, I would suggest an approach using only Linux commands.

    Assume csv file contains two columns, COL_1 and P_VALUE:

    map.py:

    import sys

    for line in sys.stdin:
        # swap the columns so that p_value comes first, ready for sort(1)
        col_1, p_value = line.rstrip("\n").split(",")
        print("%s,%s" % (p_value, col_1))
    

    then the following Linux command will generate the csv file sorted by p_value (the -g flag sorts numerically, handling scientific notation, whereas the default sort order is lexicographic and would mis-order numbers):

    python map.py < input.csv | sort -t, -g > output.csv
    

    If you're familiar with Hadoop, the same map.py plus a simple reduce.py will generate the sorted csv file via Hadoop Streaming.
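    The key-swap idea above can be sanity-checked in plain Python before wiring it into a pipeline. This is a sketch with illustrative sample data, not part of the original answer:

```python
# Sketch: swap columns so the sort key comes first, then sort numerically.
# Assumes a two-column CSV "COL_1,P_VALUE" as in the answer above.

def swap_key(line):
    """Re-emit a 'col_1,p_value' line as 'p_value,col_1'."""
    col_1, p_value = line.strip().split(",")
    return "%s,%s" % (p_value, col_1)

lines = ["gene_a,0.03", "gene_b,0.001", "gene_c,0.2"]
swapped = [swap_key(l) for l in lines]

# sort(1) with -g sorts numerically; the in-process equivalent is:
swapped.sort(key=lambda l: float(l.split(",")[0]))
print(swapped)  # ['0.001,gene_b', '0.03,gene_a', '0.2,gene_c']
```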

  • 2020-12-08 11:34

    Blaze might be the tool for you, with its ability to work with pandas and csv files out of core. http://blaze.readthedocs.org/en/latest/ooc.html

    import blaze
    import pandas as pd
    d = blaze.Data('my-large-file.csv')
    d.P_VALUE.sort()  # Uses Chunked Pandas
    

    For faster processing, load the data into a database first, which Blaze can control. But if this is a one-off and you have some time, the posted code should do it.
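    If Blaze is not an option, the same out-of-core idea can be sketched with the standard library alone: sort fixed-size chunks in memory, spill each to a temporary file, then lazily merge them with heapq.merge. The chunk size and sample rows below are illustrative:

```python
import csv, heapq, os, tempfile

def _spill(chunk):
    """Write one sorted chunk to a temporary CSV file and return its path."""
    fd, path = tempfile.mkstemp(suffix=".csv")
    with os.fdopen(fd, "w", newline="") as f:
        csv.writer(f).writerows(chunk)
    return path

def _read(path):
    with open(path, newline="") as f:
        for row in csv.reader(f):
            yield row

def external_sort(rows, chunk_size=2):
    """Sort (name, p_value) rows by p_value, holding at most
    chunk_size rows in memory at a time."""
    key = lambda r: float(r[1])
    paths, chunk = [], []
    for row in rows:
        chunk.append(row)
        if len(chunk) >= chunk_size:
            paths.append(_spill(sorted(chunk, key=key)))
            chunk = []
    if chunk:
        paths.append(_spill(sorted(chunk, key=key)))
    # Merge the already-sorted runs lazily; clean-up of the
    # temporary files is omitted here for brevity.
    for row in heapq.merge(*(_read(p) for p in paths), key=key):
        yield row

rows = [("a", "0.3"), ("b", "0.01"), ("c", "0.2"), ("d", "0.05")]
print(list(external_sort(rows)))
# [['b', '0.01'], ['d', '0.05'], ['c', '0.2'], ['a', '0.3']]
```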

  • 2020-12-08 11:35

    In the past, I've used the venerable Linux utilities sort and split to sort massive files that choked pandas.

    I don't mean to disparage the other answers on this page. However, since your data is in text format (as you indicated in the comments), I think it is an unnecessary complication to start transferring it into other formats (HDF, SQL, etc.) for something that GNU/Linux utilities have been solving very efficiently for the last 30-40 years.


    Say your file is called stuff.csv, and looks like this:

    4.9,3.0,1.4,0.6
    4.8,2.8,1.3,1.2
    

    Then the following command will sort it by the 3rd column:

    sort --parallel=8 -t ',' -nrk3 stuff.csv
    

    Note that the number of threads here is set to 8.


    The above will work with files that fit into the main memory. When your file is too large, you would first split it into a number of parts. So

    split -l 100000 stuff.csv stuff_part_
    

    would split the file into parts of at most 100000 lines each, named stuff_part_aa, stuff_part_ab, and so on.

    Now you would sort each part individually, as above, writing each result to a file such as sorted_stuff_part_aa. Finally, you would merge the sorted parts with the same sort keys, again through (wait for it...) sort:

    sort -m -t ',' -nrk3 sorted_stuff_part_* > final_sorted_stuff.csv
    

    Finally, if your file is not in CSV (say it is a tgz file), then you should find a way to pipe a CSV version of it into split.
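    Putting the three steps together, here is a minimal end-to-end sketch; the file names and the one-line chunk size are illustrative (use something like -l 100000 on real data):

```shell
set -e
# Build a tiny comma-separated sample file (stands in for the real stuff.csv).
printf '4.9,3.0,1.4,0.6\n4.8,2.8,1.3,1.2\n5.1,3.5,1.5,0.2\n' > stuff.csv

# 1. Split into parts of at most 1 line each.
split -l 1 stuff.csv stuff_part_

# 2. Sort each part individually by the 3rd comma-separated column, descending.
for part in stuff_part_*; do
    sort -t ',' -nrk3 "$part" > "sorted_$part"
done

# 3. Merge the already-sorted parts in a single pass.
sort -m -t ',' -nrk3 sorted_stuff_part_* > final_sorted_stuff.csv

cat final_sorted_stuff.csv
```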

  • 2020-12-08 11:44

    As I mentioned in the comments, this answer already provides a possible solution, based on the HDF format.

    About the sorting problem, there are at least three possible ways to solve it with that approach.

    First, you can try to use pandas directly, querying the HDF-stored-DataFrame.

    Second, you can use PyTables, which pandas uses under the hood.

    Francesc Alted gives a hint in the PyTables mailing list:

    The simplest way is by setting the sortby parameter to true in the Table.copy() method. This triggers an on-disk sorting operation, so you don't have to be afraid of your available memory. You will need the Pro version for getting this capability.

    In the docs, it says:

    sortby : If specified, and sortby corresponds to a column with an index, then the copy will be sorted by this index. If you want to ensure a fully sorted order, the index must be a CSI one. A reverse sorted copy can be achieved by specifying a negative value for the step keyword. If sortby is omitted or None, the original table order is used

    Third, still with PyTables, you can use the method Table.itersorted().

    From the docs:

    Table.itersorted(sortby, checkCSI=False, start=None, stop=None, step=None)

    Iterate table data following the order of the index of sortby column. The sortby column must have associated a full index.
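    A minimal sketch of the itersorted route, assuming PyTables is installed; the table layout, column names, and file name are all illustrative:

```python
import tables

class Result(tables.IsDescription):
    name = tables.StringCol(16)
    p_value = tables.Float64Col()

with tables.open_file("results.h5", mode="w") as h5:
    table = h5.create_table("/", "results", Result)
    for n, p in [(b"a", 0.3), (b"b", 0.01), (b"c", 0.2)]:
        row = table.row
        row["name"] = n
        row["p_value"] = p
        row.append()
    table.flush()

    # itersorted requires a completely sorted (CSI) index on the column.
    table.cols.p_value.create_csindex()

    # Rows come back in p_value order without loading the table into memory.
    names = [r["name"] for r in table.itersorted("p_value")]
    print(names)  # [b'b', b'c', b'a']
```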


    Another approach consists in using a database in between. The detailed workflow can be seen in this IPython Notebook published at plot.ly.

    This makes it possible to solve the sorting problem, along with other data analyses that pandas supports. The notebook appears to have been created by the user chris, so all credit goes to him. I am copying the relevant parts here.

    Introduction

    This notebook explores a 3.9 GB CSV file.

    This notebook is a primer on out-of-memory data analysis with

    • pandas: A library with easy-to-use data structures and data analysis tools. Also, interfaces to out-of-memory databases like SQLite.
    • IPython notebook: An interface for writing and sharing python code, text, and plots.
    • SQLite: A self-contained, serverless database that's easy to set up and query from pandas.
    • Plotly: A platform for publishing beautiful, interactive graphs from Python to the web.

    Requirements

    import pandas as pd
    from sqlalchemy import create_engine # database connection 
    

    Import the CSV data into SQLite

    1. Load the CSV, chunk-by-chunk, into a DataFrame
    2. Process the data a bit, strip out uninteresting columns
    3. Append it to the SQLite database

    disk_engine = create_engine('sqlite:///311_8M.db') # Initializes database with filename 311_8M.db in current directory
    
    chunksize = 20000
    index_start = 1
    
    for df in pd.read_csv('311_100M.csv', chunksize=chunksize, iterator=True, encoding='utf-8'):
    
        # do stuff   
    
        df.index += index_start
    
        df.to_sql('data', disk_engine, if_exists='append')
        index_start = df.index[-1] + 1
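
    The same chunked-import pattern can be sketched without pandas or SQLAlchemy, using only the standard library; the table layout and the toy in-memory CSV below are illustrative:

```python
import csv, io, itertools, sqlite3

def import_in_chunks(csv_text, db_path=":memory:", chunksize=20000):
    """Stream a CSV into SQLite without loading it all into memory."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS data (agency TEXT, city TEXT)")
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    while True:
        # Pull at most `chunksize` rows at a time from the reader.
        chunk = list(itertools.islice(reader, chunksize))
        if not chunk:
            break
        conn.executemany("INSERT INTO data VALUES (?, ?)", chunk)
        conn.commit()
    return conn

conn = import_in_chunks("Agency,City\nHPD,BROOKLYN\nNYPD,QUEENS\nHPD,BRONX\n",
                        chunksize=2)
print(conn.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # 3
```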
    

    Query value counts and order the results

    Housing and Development Dept receives the most complaints

    df = pd.read_sql_query('SELECT Agency, COUNT(*) as `num_complaints` '
                           'FROM data '
                           'GROUP BY Agency '
                           'ORDER BY -num_complaints', disk_engine)
    

    Limiting the number of sorted entries

    Which 10 cities file the most complaints?

    df = pd.read_sql_query('SELECT City, COUNT(*) as `num_complaints` '
                           'FROM data '
                           'GROUP BY `City` '
                           'ORDER BY -num_complaints '
                           'LIMIT 10 ', disk_engine)
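
    The same top-N query can be checked with the standard-library sqlite3 module alone; the table, column names, and sample rows here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (City TEXT)")
conn.executemany("INSERT INTO data VALUES (?)",
                 [("BROOKLYN",), ("BROOKLYN",), ("QUEENS",),
                  ("BRONX",), ("BROOKLYN",), ("QUEENS",)])

# ORDER BY -num_complaints sorts descending, as in the pandas query above.
rows = conn.execute("SELECT City, COUNT(*) AS num_complaints "
                    "FROM data GROUP BY City "
                    "ORDER BY -num_complaints LIMIT 10").fetchall()
print(rows)  # [('BROOKLYN', 3), ('QUEENS', 2), ('BRONX', 1)]
```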
    

    Possibly related and useful links

    • Pandas: in memory sorting hdf5 files
    • ptrepack sortby needs 'full' index
    • http://pandas.pydata.org/pandas-docs/stable/cookbook.html#hdfstore
    • http://www.pytables.org/usersguide/optimization.html
  • 2020-12-08 11:55

    Here is my honest suggestion: three options.

    1. I like pandas for its rich documentation and features, but I have been advised that NumPy can feel faster for larger datasets. You can also consider other tools for an easier job.

    2. If you are using Python 3, you can break your big dataset into chunks and process them concurrently. I am too lazy for this myself, and it does not look elegant, but pandas, NumPy, and SciPy are, I believe, built with hardware design in mind to enable multithreading.

    3. I prefer this one; it is the easy, lazy technique in my view. Check the document at http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort.html

    You can also pass the 'kind' parameter to whichever pandas sort function you are using.

    Godspeed my friend.
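
    Note that the linked DataFrame.sort has since been replaced by sort_values in newer pandas versions, which accepts the same kind parameter. A minimal sketch with illustrative column names:

```python
import pandas as pd

df = pd.DataFrame({"gene": ["a", "b", "c"], "p_value": [0.3, 0.01, 0.2]})

# 'kind' selects the underlying sort algorithm; 'mergesort' is the stable choice.
out = df.sort_values("p_value", kind="mergesort")
print(out["gene"].tolist())  # ['b', 'c', 'a']
```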
