Let's say I have a text file of hundreds of URLs in one location, e.g.
http://url/file_to_download1.gz
http://url/file_to_download2.gz
http://url/file_to_downlo
A quick man wget gives me the following:
[..]
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line.
[..]
So: wget -i text_file.txt
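If the list is long, you can combine -i with other standard wget options; a small sketch, assuming you want resumable downloads placed in a downloads/ directory (-c continues partially downloaded files, -P sets the target directory):

wget -c -P downloads/ -i text_file.txt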
If you're on OpenWrt or using an old version of wget that doesn't give you the -i option:
#!/bin/bash
input="text_file.txt"
# Read the URL list line by line and download each entry;
# quoting "$line" avoids problems with special characters.
while IFS= read -r line
do
  wget "$line"
done < "$input"
Furthermore, if you don't have wget, you can use curl or whatever tool you use for downloading individual files.
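For example, a minimal sketch of the same loop using curl (assuming the list is still in text_file.txt; -L follows redirects and -O saves each file under its remote name):

#!/bin/bash
input="text_file.txt"
# Same loop as above, but using curl instead of wget
while IFS= read -r line
do
  curl -L -O "$line"
done < "$input"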
Try:
wget -i text_file.txt
(check man wget)
If you also want to preserve the original file name, try:
wget --content-disposition --trust-server-names -i list_of_urls.txt
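If you end up using curl for this instead, a roughly equivalent sketch (assuming your curl supports -J/--remote-header-name, which names the file after the server-suggested Content-Disposition header):

xargs -n 1 curl -L -O -J < list_of_urls.txt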
Run it in parallel with:
cat text_file.txt | parallel --gnu "wget {}"
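If GNU parallel isn't installed, xargs can give a similar effect; a sketch, assuming you want at most four simultaneous downloads (-P 4 runs up to four wget processes, -n 1 passes one URL per invocation, -q keeps the interleaved output quiet):

xargs -n 1 -P 4 wget -q < text_file.txt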