How to `wget` a list of URLs in a text file?

Submitted by ぐ巨炮叔叔 on 2020-04-07 11:02:57

Question


Let's say I have a text file of hundreds of URLs in one location, e.g.

http://url/file_to_download1.gz
http://url/file_to_download2.gz
http://url/file_to_download3.gz
http://url/file_to_download4.gz
http://url/file_to_download5.gz
....

What is the correct way to download each of these files with wget? I suspect there's a command like wget -flag -flag text_file.txt


Answer 1:


A quick man wget gives me the following:

[..]

-i file

--input-file=file

Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)

If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line.

[..]

So: wget -i text_file.txt
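
In practice you may want to combine -i with a few other standard wget options. A minimal sketch (the directory name download_dir is just an example, not something from the question):

# Download every URL listed in text_file.txt, one per line.
# -nc             skip files that have already been downloaded
# -P download_dir save everything into the given directory
# -t 3            retry each URL up to 3 times
wget -nc -P download_dir -t 3 -i text_file.txt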




Answer 2:


try:

wget -i text_file.txt

(check man wget)




Answer 3:


If you also want to preserve the original file name, try:

wget --content-disposition --trust-server-names -i list_of_urls.txt



Answer 4:


If you're on OpenWrt or using an old version of wget that doesn't give you the -i option:

#!/bin/bash
# Read the URL list line by line and download each one.
input="text_file.txt"
while IFS= read -r line
do
  wget "$line"   # quote the variable so URLs with special characters survive
done < "$input"

Furthermore, if you don't have wget, you can use curl or whatever you use for downloading individual files.
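
For example, a minimal curl-based equivalent of the loop above might look like this (assuming curl is available and the list file still has one URL per line):

#!/bin/sh
# Download each URL with curl, keeping the remote file name (-O)
# and following redirects (-L).
input="text_file.txt"
while IFS= read -r line
do
  curl -L -O "$line"
done < "$input"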




Answer 5:


Run it in parallel with

cat text_file.txt | parallel --gnu "wget {}"
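
If you want to cap the number of simultaneous downloads, GNU parallel's -j option controls the job count. A sketch assuming GNU parallel is installed (the value 4 is just an example):

# Download at most 4 files at a time from the list.
parallel --gnu -j 4 "wget {}" < text_file.txt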


Source: https://stackoverflow.com/questions/40986340/how-to-wget-a-list-of-urls-in-a-text-file
