Multiple simultaneous downloads using Wget?

旧时难觅i 2020-12-04 04:50

I'm using wget to download website content, but wget downloads the files one by one.

How can I make wget download using 4 simultaneous connections?

16 Answers
  • 2020-12-04 05:14

    A new (but not yet released) tool is Mget. It already supports many options known from Wget and comes with a library that lets you easily embed (recursive) downloading into your own application.

    To answer your question:

    mget --num-threads=4 [url]

    UPDATE

    Mget is now developed as Wget2 with many bugs fixed and more features (e.g. HTTP/2 support).

    --num-threads is now --max-threads.
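
    With Wget2, the equivalent call would look roughly like this (a sketch assuming the binary is installed as wget2):

    # download one URL using up to 4 parallel threads (Wget2)
    wget2 --max-threads=4 [url]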

  • 2020-12-04 05:14

    Use xargs to make wget download multiple files in parallel:

    #!/bin/bash
    
    mywget()
    {
        wget "$1"
    }
    
    export -f mywget
    
    # run up to 8 wget processes in parallel, one URL per line
    xargs -P 8 -I {} bash -c 'mywget "$1"' _ {} < list_urls.txt
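
    A simpler variant of the same idea, assuming list_urls.txt contains one plain URL per line (the wrapper function is only needed if you want extra logic per download; -q just quiets wget's interleaved output):

    # each line becomes one wget invocation, with at most 8 running at once
    xargs -P 8 -n 1 wget -q < list_urls.txt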
    

    Aria2 options: the right way to work with files smaller than 20 MB

    aria2c -k 2M -x 10 -s 10 [url]
    

    -k 2M splits the file into 2 MB chunks

    -k (or --min-split-size) has a default value of 20 MB. If you don't set this option and the file is under 20 MB, it will only run in a single connection no matter what value of -x or -s you use.
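
    For example, to force multiple connections on a file well under 20 MB (the URL is just a placeholder):

    # lower the split size so even a ~10 MB file is fetched over 4 connections
    aria2c --min-split-size=1M -x 4 -s 4 https://example.com/file.zip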

  • 2020-12-04 05:17

    Use

    aria2c -x 10 -i websites.txt >/dev/null 2>/dev/null &
    

    In websites.txt, put one URL per line, for example:

    https://www.example.com/1.mp4
    https://www.example.com/2.mp4
    https://www.example.com/3.mp4
    https://www.example.com/4.mp4
    https://www.example.com/5.mp4
    
  • 2020-12-04 05:19

    wget can't download over multiple connections; instead, you can try another program such as aria2.
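
    A minimal aria2 invocation for a single download would look something like this (the URL is a placeholder):

    # open up to 4 connections to the server for one file
    aria2c -x 4 [url]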

  • 2020-12-04 05:20

    Since GNU parallel hasn't been mentioned yet, here is another way ({} is the URL and {#} is the job number, used to give each output file a unique name):

    cat url.list | parallel -j 8 wget -O {#}.html {}
    
  • 2020-12-04 05:20

    Consider using regular expressions or FTP globbing. That way you can start wget multiple times with different groups of filename starting characters, depending on their frequency of occurrence.

    This is, for example, how I sync a folder between two NAS devices:

    wget --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --bind-address=10.0.0.10 --user=<ftp_user> --password=<ftp_password> "ftp://10.0.0.100/foo/bar/[0-9a-hA-H]*" --directory-prefix=/volume1/foo &
    wget --recursive --level 0 --no-host-directories --cut-dirs=2 --no-verbose --timestamping --backups=0 --bind-address=10.0.0.11 --user=<ftp_user> --password=<ftp_password> "ftp://10.0.0.100/foo/bar/[!0-9a-hA-H]*" --directory-prefix=/volume1/foo &
    

    The first wget syncs all files and folders whose names start with 0-9, a-h, or A-H, and the second wget syncs everything else.

    This was the easiest way to sync between a NAS with one 10G Ethernet port (10.0.0.100) and a NAS with two 1G Ethernet ports (10.0.0.10 and 10.0.0.11). I bound the two wget processes to the different Ethernet ports via --bind-address and ran them in parallel by putting & at the end of each line. That way I was able to copy huge files at 2x 100 MB/s = 200 MB/s in total.
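
    A stripped-down sketch of the same pattern (addresses and the FTP path come from the example above; the essential pieces are --bind-address, the trailing &, and wait to block until both transfers finish):

    # each wget is pinned to one local interface and runs in the background
    wget --bind-address=10.0.0.10 "ftp://10.0.0.100/foo/bar/[0-9a-hA-H]*" &
    wget --bind-address=10.0.0.11 "ftp://10.0.0.100/foo/bar/[!0-9a-hA-H]*" &
    wait   # wait for both background downloads to complete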
