Why does this python multiprocessing script slow down after a while?

夕颜 2021-01-02 05:07

Building on the script from this answer, I have the following scenario: a folder containing 2500 large text files (~55 MB each), all tab-delimited. Web logs, basically.

2 Answers
  • 2021-01-02 05:28

    Hashing is a relatively simple task, and modern CPUs are very fast compared to spinning disks. A quick-and-dirty benchmark on an i7 shows that it can hash about 450 MB/s using MD5, or 290 MB/s using SHA-1. By comparison, a spinning disk has a typical (sequential raw read) speed of about 70-150 MB/s. This means that, even ignoring filesystem overhead and occasional disk seeks, the CPU can hash a file about 3x faster than the disk can read it.
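
    For a rough sense of the CPU side of that comparison, here is a minimal sketch using Python's hashlib to measure raw MD5 throughput on in-memory data (the buffer and chunk sizes are arbitrary choices, and the numbers will vary by machine):

    import hashlib
    import time

    # Hash 512 MB of in-memory data in 1 MB chunks and report throughput.
    # This isolates the CPU cost of MD5; no disk I/O is involved.
    size_mb = 512
    chunk = b"\x00" * (1024 * 1024)

    start = time.perf_counter()
    digest = hashlib.md5()
    for _ in range(size_mb):
        digest.update(chunk)
    elapsed = time.perf_counter() - start

    print(f"MD5: {size_mb / elapsed:.0f} MB/s")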

    The performance boost you see while processing the first files probably happens because those files are already cached in memory by the operating system, so no disk I/O takes place. You can confirm this in a few ways (a small timing sketch follows the list):

    • rebooting the server, thus flushing the cache
    • filling the cache with something else, by reading enough large files from the disk
    • listening closely for the absence of disk seeks while processing the first files
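
    One quick way to see the cache effect is to time the same large file being read twice: if the second pass is dramatically faster, the first pass was served by the disk and left the data in the page cache. A minimal sketch, where the path is just a placeholder for one of the log files:

    import time

    # Placeholder path; point it at one of the large log files.
    path = "/data/logs/access-0001.log"

    def read_all(p):
        start = time.perf_counter()
        with open(p, "rb") as f:
            while f.read(1024 * 1024):
                pass
        return time.perf_counter() - start

    # The first read may hit the disk; the second is usually served from RAM.
    print(f"first read : {read_all(path):.2f} s")
    print(f"second read: {read_all(path):.2f} s")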

    Now, since the performance bottleneck for hashing the files is the disk, performing the hashing in multiple processes or threads is useless, because they'll all be using the same disk. As @Max Noel mentioned, it can actually lower performance, because you'll be reading several files in parallel, forcing the disk to seek back and forth between them. The performance will also vary with the I/O scheduler of the operating system you're using, as he mentioned.
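
    To check that on your own data, you could time a plain sequential loop against a multiprocessing.Pool over the same files. A rough sketch (the glob pattern and worker count are arbitrary assumptions; note that the first pass warms the page cache, so for a fair comparison the data set should be larger than RAM, or the cache should be dropped between runs):

    import glob
    import hashlib
    import time
    from multiprocessing import Pool

    # Placeholder pattern for the folder of tab-delimited log files.
    FILES = sorted(glob.glob("/data/logs/*.log"))

    def md5_file(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return path, h.hexdigest()

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.1f} s")

    if __name__ == "__main__":
        # One process reading and hashing file after file.
        timed("sequential ", lambda: [md5_file(p) for p in FILES])
        # Four workers hashing different files at once; on a single
        # spinning disk this mainly adds seeks rather than speed.
        with Pool(4) as pool:
            timed("4 processes", lambda: pool.map(md5_file, FILES))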

    Now, if you're still generating data, you have some possible solutions:

    • Use a faster disk, or a SSD, as @Max Noel suggested.
    • Read from multiple disks - either in different filesystems or in a single filesystem over RAID
    • Split the task over multiple machines (with a single or multiple disks each)

    These solutions, however, are useless if all you want to do is hash those 2500 files and you already have them on a single disk. Copying them from that disk to other disks and then hashing is slower, since you'll end up reading the files twice, and you can already hash them as fast as you can read them.

    Finally, based on @yaccz's idea, I guess you could have avoided the trouble of writing a program to perform the hashing if you had installed Cygwin binaries of find, xargs and md5sum.

  • 2021-01-02 05:29

    Why do things the simple way when one can make them complicated?

    Mount the drives via smbfs or whatnot on a Linux host and run:

    #! /bin/sh

    SRC="" # FIXME: directory containing the tab-delimited log files
    DST="" # FIXME: directory to write the converted files to

    TAB=$(printf '\t')

    # Rewrite one line: keep field 1, replace field 2 with its MD5 hash
    # (unless it is "-"), and pass the remaining fields through unchanged.
    convert_line() {
        line=$1
        f1=$(printf '%s\n' "$line" | cut -f 1 -d "$TAB")
        f2=$(printf '%s\n' "$line" | cut -f 2 -d "$TAB")
        frest=$(printf '%s\n' "$line" | cut -f 1,2 --complement -d "$TAB")

        if [ "$f2" != "-" ] ; then
            f2=$(printf '%s' "$f2" | md5sum | cut -d ' ' -f 1)
            # might wanna throw in some memoization
        fi

        printf '%s\t%s\t%s\n' "$f1" "$f2" "$frest"
    }

    # Process one file line by line into $DST/hashed-<basename>.
    convert_file() {
        out="$DST/hashed-$(basename "$1")"
        while IFS= read -r line; do
            convert_line "$line" >> "$out"
        done < "$1"
    }

    for f in "$SRC"/*; do
        convert_file "$f"
    done


    Not tested; some rough edges might need polishing.
