How to read a large file - line by line?

一整个雨季 2020-11-21 11:44

I want to iterate over each line of an entire file. One way to do this is by reading the entire file, saving it to a list, then going over the line of interest. This method uses a lot of memory, so I am looking for an alternative that reads the file line by line without holding it all in memory at once.

11 Answers
  • 2020-11-21 11:54
    # Using a text file for the example
    with open("yourFile.txt", "r") as f:
        text = f.readlines()
    for line in text:
        print(line)
    
    • Open your file for reading (r)
    • Read the whole file and save each line into a list (text)
    • Loop through the list printing each line.

    If you want, for example, to check a specific line for a length greater than 10, work with what you already have available.

    for line in text:
        if len(line) > 10:
            print(line)
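
    For a genuinely large file you may not want the readlines() list in memory at all. A minimal sketch of the same length check done while streaming, iterating over the file object directly:

    # Streams the file line by line instead of building a list first
    with open("yourFile.txt", "r") as f:
        for line in f:
            if len(line) > 10:
                print(line)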
    
  • 2020-11-21 12:00

    Need to frequently read a large file, resuming from the position where the last read stopped?

    I have created a script that cuts an Apache access.log file several times a day, so I needed to set a position cursor on the last line parsed during the previous run. To this end, I used the file.seek() and file.tell() methods, which let me store the cursor position in a file.

    My code:

    ENCODING = "utf8"
    CURRENT_FILE_DIR = os.path.dirname(os.path.abspath(__file__))
    
    # This file is used to store the last cursor position
    cursor_position = os.path.join(CURRENT_FILE_DIR, "access_cursor_position.log")
    
    # Log file with new lines
    log_file_to_cut = os.path.join(CURRENT_FILE_DIR, "access.log")
    cut_file = os.path.join(CURRENT_FILE_DIR, "cut_access", "cut.log")
    
    # Restore the last cursor position (defaults to 0 on the first run)
    from_position = 0
    try:
        with open(cursor_position, "r", encoding=ENCODING) as f:
            from_position = int(f.read())
    except Exception:
        # No cursor file yet (first run): start from the beginning
        pass
    
    # We read log_file_to_cut to put new lines in cut_file
    with open(log_file_to_cut, "r", encoding=ENCODING) as f:
        with open(cut_file, "w", encoding=ENCODING) as fw:
            # We set cursor to the last position used (during last run of script)
            f.seek(from_position)
            for line in f:
                fw.write(line)
    
        # We save the last position of cursor for next usage
        with open(cursor_position, "w", encoding=ENCODING) as fw:
            fw.write(str(f.tell()))
    
  • 2020-11-21 12:00

    A convenient way to read a large file line by line, while keeping track of the line number, is to use Python's enumerate function:

    with open(file_name, "rU") as read_file:
        for i, row in enumerate(read_file, 1):
            #do something
            #i in line of that line
            #row containts all data of that line
    
  • 2020-11-21 12:01

    This is a possible way of reading a file in Python:

    f = open(input_file)
    for line in f:
        do_stuff(line)
    f.close()
    

    It does not allocate a full list; it iterates over the lines lazily.
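
    As a variant (a minimal sketch of the same idea), a with statement closes the file automatically, even if do_stuff raises an exception:

    with open(input_file) as f:
        for line in f:
            do_stuff(line)
    # no explicit f.close() needed: the with block closes the file on exit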

  • 2020-11-21 12:03

    Katrielalex provided the way to open & read one file.

    However, the way your algorithm goes, it reads the whole file once for each line of the file. That means the overall amount of file reading - and of computing the Levenshtein distance - will be N*N if N is the number of lines in the file. Since you're concerned about file size and don't want to keep it in memory, I am concerned about the resulting quadratic runtime. Your algorithm is in the O(n^2) class of algorithms, which can often be improved with specialization.

    I suspect that you already know the tradeoff of memory versus runtime here, but maybe you would want to investigate if there's an efficient way to compute multiple Levenshtein distances in parallel. If so it would be interesting to share your solution here.

    How many lines do your files have, and on what kind of machine (mem & cpu power) does your algorithm have to run, and what's the tolerated runtime?

    Code would look like:

    with open(input_file, 'r') as f_outer:
        for line_outer in f_outer:
            with open(input_file, 'r') as f_inner:
                for line_inner in f_inner:
                    compute_distance(line_outer, line_inner)
    

    But the questions are how do you store the distances (matrix?) and can you gain an advantage of preparing e.g. the outer_line for processing, or caching some intermediate results for reuse.
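
    A minimal sketch of one way to parallelize the outer loop with multiprocessing, storing the results as a full distance matrix. The levenshtein helper and the input_file path are illustrative, not from the question, and this version deliberately trades memory back for runtime by loading all lines once:

    from multiprocessing import Pool

    def levenshtein(a, b):
        # Plain dynamic-programming edit distance, kept simple rather than fast
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    _lines = None

    def _init(lines):
        # Hand each worker its own copy of the lines once, instead of
        # shipping the whole list with every task
        global _lines
        _lines = lines

    def distances_for(i):
        # One row of the matrix: distances from line i to every line
        return [levenshtein(_lines[i], other) for other in _lines]

    if __name__ == '__main__':
        input_file = "lines.txt"  # illustrative path
        with open(input_file, 'r') as f:
            all_lines = f.read().splitlines()
        with Pool(initializer=_init, initargs=(all_lines,)) as pool:
            matrix = pool.map(distances_for, range(len(all_lines)))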

  • 2020-11-21 12:04

    Some context up front as to where I am coming from. Code snippets are at the end.

    When I can, I prefer to use an open-source tool like H2O to do super-high-performance parallel CSV file reads, but its feature set is limited. I end up writing a lot of code to create data science pipelines before feeding the data to an H2O cluster for the supervised learning proper.

    I have been reading files like the 8 GB HIGGS dataset from the UCI repository, and even 40 GB CSV files, significantly faster for data science purposes by adding lots of parallelism with the multiprocessing library's Pool object and map function. For example, clustering with nearest-neighbor searches, and also the DBSCAN and Markov clustering algorithms, requires some parallel-programming finesse to bypass some seriously challenging memory and wall-clock-time problems.

    I usually like to break the file row-wise into parts using GNU tools first, and then glob-filemask them all so they can be found and read in parallel in the Python program. I commonly use something like 1000+ partial files. These tricks help immensely with processing speed and memory limits.

    The pandas dataframe.read_csv is single-threaded, so you can use these tricks to make pandas considerably faster by running a map() for parallel execution (see the sketch below). You can use htop to see that with plain old sequential pandas dataframe.read_csv, 100% CPU on just one core is the actual bottleneck in pd.read_csv, not the disk at all.
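
    A minimal sketch of that split-then-read-in-parallel pattern, assuming the big file has already been split row-wise from the shell (the file names, the header=None choice, and the chunk size are illustrative, not the author's actual pipeline):

    # e.g. split the big file first:  split -l 1000000 big.csv part_

    import glob
    from multiprocessing import Pool

    import pandas as pd

    def read_part(path):
        # Each worker parses one partial CSV on its own core
        return pd.read_csv(path, header=None)

    if __name__ == '__main__':
        parts = sorted(glob.glob("part_*"))        # glob-filemask the partial files
        with Pool() as pool:
            frames = pool.map(read_part, parts)    # parallel pd.read_csv
        df = pd.concat(frames, ignore_index=True)  # reassemble one dataframe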

    I should add I'm using an SSD on fast video card bus, not a spinning HD on SATA6 bus, plus 16 CPU cores.

    Also, another technique that I discovered works great in some applications is parallel CSV file reading all within one giant file, starting each worker at a different offset into the file, rather than pre-splitting one big file into many part files. Use Python's file seek() and tell() in each parallel worker to read the big text file in strips, at different start-byte and end-byte locations in the big file, all concurrently. You can do a regex findall on the bytes and return the count of linefeeds. This is a partial sum. Finally, sum up the partial sums to get the global sum when the map function returns after the workers have finished.

    Following is some example benchmarks using the parallel byte offset trick:

    I use 2 files: HIGGS.csv is 8 GB and is from the UCI machine learning repository. all_bin.csv is 40.4 GB and is from my current project. I use 2 programs: the GNU wc program, which comes with Linux, and the pure Python fastread.py program, which I developed.

    HP-Z820:/mnt/fastssd/fast_file_reader$ ls -l /mnt/fastssd/nzv/HIGGS.csv
    -rw-rw-r-- 1 8035497980 Jan 24 16:00 /mnt/fastssd/nzv/HIGGS.csv
    
    HP-Z820:/mnt/fastssd$ ls -l all_bin.csv
    -rw-rw-r-- 1 40412077758 Feb  2 09:00 all_bin.csv
    
    ga@ga-HP-Z820:/mnt/fastssd$ time python fastread.py --fileName="all_bin.csv" --numProcesses=32 --balanceFactor=2
    2367496
    
    real    0m8.920s
    user    1m30.056s
    sys 2m38.744s
    
    In [1]: 40412077758. / 8.92
    Out[1]: 4530501990.807175
    

    That’s some 4.5 GB/s, or roughly 36 Gb/s, of file slurping speed. That ain’t no spinning hard disk, my friend. That’s actually a Samsung Pro 950 SSD.

    Below is the speed benchmark for the same file being line-counted by gnu wc, a pure C compiled program.

    What is cool is that you can see my pure Python program essentially matched the speed of the GNU wc compiled C program in this case. Python is interpreted but C is compiled, so this is a pretty interesting feat of speed, I think you would agree. Of course, wc really needs to be changed to a parallel program, and then it would really beat the socks off my Python program. But as it stands today, GNU wc is just a sequential program. You do what you can, and Python can do parallel today. Cython compiling might be able to help me (for some other time). Also, memory-mapped files have not been explored yet.

    HP-Z820:/mnt/fastssd$ time wc -l all_bin.csv
    2367496 all_bin.csv
    
    real    0m8.807s
    user    0m1.168s
    sys 0m7.636s
    
    
    HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=16 --balanceFactor=2
    11000000
    
    real    0m2.257s
    user    0m12.088s
    sys 0m20.512s
    
    HP-Z820:/mnt/fastssd/fast_file_reader$ time wc -l HIGGS.csv
    11000000 HIGGS.csv
    
    real    0m1.820s
    user    0m0.364s
    sys 0m1.456s
    

    Conclusion: The speed is good for a pure Python program compared to a C program. However, it’s not good enough to use the pure Python program over the C program, at least for line counting. Generally the technique can be used for other kinds of file processing, so this Python code is still good.

    Question: Does compiling the regex just one time and passing it to all workers improve speed? Answer: Regex pre-compiling does NOT help in this application. I suppose the reason is that the overhead of process serialization and creation for all the workers dominates.

    One more thing. Does parallel CSV file reading even help? Is the disk the bottleneck, or is it the CPU? Many so-called top-rated answers on stackoverflow contain the common dev wisdom that you only need one thread to read a file, best you can do, they say. Are they sure, though?

    Let’s find out:

    HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=16 --balanceFactor=2
    11000000
    
    real    0m2.256s
    user    0m10.696s
    sys 0m19.952s
    
    HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=1 --balanceFactor=1
    11000000
    
    real    0m17.380s
    user    0m11.124s
    sys 0m6.272s
    

    Oh yes, yes it does. Parallel file reading works quite well. Well there you go!

    P.S. In case some of you wanted to know: what if the balanceFactor were 2 when using a single worker process? Well, it’s horrible:

    HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=1 --balanceFactor=2
    11000000
    
    real    1m37.077s
    user    0m12.432s
    sys 1m24.700s
    

    Key parts of the fastread.py python program:

    import re
    from itertools import repeat
    from multiprocessing import Pool
    from os import stat

    fileBytes = stat(fileName).st_size  # Ask the OS how many bytes are in the text file
    startByte, endByte = PartitionDataToWorkers(workers=numProcesses, items=fileBytes, balanceFactor=balanceFactor)
    p = Pool(numProcesses)
    partialSum = p.starmap(ReadFileSegment, zip(startByte, endByte, repeat(fileName)))  # startByte and endByte are already lists; repeat() pairs every segment with the same fileName
    globalSum = sum(partialSum)
    print(globalSum)
    
    
    def ReadFileSegment(startByte, endByte, fileName, searchChar=b'\n'):  # counts occurrences of searchChar in the byte range
        # Binary mode, so arbitrary byte offsets are safe to seek to
        with open(fileName, 'rb') as f:
            f.seek(startByte - 1)  # seek(n) jumps to absolute position n (0-based), so this lands on the 1-based startByte-th byte
            segment = f.read(endByte - startByte + 1)
            cnt = len(re.findall(searchChar, segment))  # findall with implicit compiling runs about as fast here as re.compile once + re.finditer many times
        return cnt
    

    The def for PartitionDataToWorkers is just ordinary sequential code. I left it out in case someone else wants to get some practice on what parallel programming is like. I gave away for free the harder parts: the tested and working parallel code, for your learning benefit.

    Thanks to: The open-source H2O project, by Arno and Cliff and the H2O staff for their great software and instructional videos, which have provided me the inspiration for this pure python high performance parallel byte offset reader as shown above. H2O does parallel file reading using java, is callable by python and R programs, and is crazy fast, faster than anything on the planet at reading big CSV files.
