How to `wget` a list of URLs in a text file?

余生分开走 2021-01-30 03:46

Let's say I have a text file of hundreds of URLs in one location, e.g.

http://url/file_to_download1.gz
http://url/file_to_download2.gz
http://url/file_to_downlo         

5 Answers
  • 2021-01-30 04:20

    A quick `man wget` gives me the following:

    [..]

    -i file

    --input-file=file

    Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)

    If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line.

    [..]

    So: wget -i text_file.txt
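
    As a minimal sketch on top of that (the downloads/ directory is just an illustrative name), -i combines with the usual flags, e.g. -P to choose the target directory and -c to resume partially downloaded files:

    # Read URLs from text_file.txt, save into downloads/, resume partial files
    wget -i text_file.txt -P downloads/ -c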

  • 2021-01-30 04:21

    If you're on OpenWrt or using an old version of wget that doesn't give you the -i option:

    #!/bin/bash
    input="text_file.txt"
    # Read the file line by line and download each URL in turn;
    # quoting "$line" keeps URLs with special characters intact.
    while IFS= read -r line
    do
      wget "$line"
    done < "$input"
    

    Furthermore, if you don't have wget, you can use curl or whatever tool you normally use to download individual files.
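
    A minimal sketch of the curl route, assuming GNU xargs is available (text_file.txt is the same one-URL-per-line list); -O saves each file under its remote name:

    # Feed one URL at a time to curl; -O keeps the remote file name
    xargs -n 1 curl -O < text_file.txt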

  • 2021-01-30 04:25

    try:

    wget -i text_file.txt
    

    (check man wget)

  • 2021-01-30 04:30

    If you also want to preserve the original file name, try:

    wget --content-disposition --trust-server-names -i list_of_urls.txt
    
  • 2021-01-30 04:34

    Run it in parallel with GNU parallel:

    cat text_file.txt | parallel --gnu "wget {}"
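
    If GNU parallel isn't installed, a rough equivalent using GNU xargs is sketched below (4 concurrent jobs is an arbitrary choice):

    # -P 4 runs up to four wget processes at once, -n 1 passes one URL per invocation
    xargs -P 4 -n 1 wget < text_file.txt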
    