wget or curl from stdin

青春惊慌失措 2020-12-30 03:20

I'd like to download web pages while supplying URLs from stdin. Essentially, one process continuously produces URLs to stdout (or to a file) and I want to pipe them to wget or curl.

5 Answers
  • 2020-12-30 04:09

    What you need to use is xargs. E.g.

    tail -f 1.log | xargs -n1 wget -O - -q
    
  • 2020-12-30 04:16

    You can do this with cURL, but your input needs to be properly formatted. Example alfa.txt:

    url example.com
    output example.htm
    url stackoverflow.com
    output stackoverflow.htm
    

    Alternate example:

    url stackoverflow.com/questions
    remote-name
    url stackoverflow.com/documentation
    remote-name
    

    Example command:

    cat alfa.txt | curl -K-
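
    If your producer emits bare URLs (one per line) rather than this config syntax, a small helper can rewrite the stream on the fly. This is only a sketch: Python 3 is assumed, and the script name make_curl_config.py and the one-URL-per-line input format are my own choices, not anything curl requires.

    # make_curl_config.py -- hypothetical helper: turn one URL per line on stdin
    # into the url/remote-name pairs that curl's -K (config) option understands.
    import sys

    for line in sys.stdin:
        url = line.strip()
        if not url:
            continue                  # ignore blank lines
        print(f'url "{url}"')         # double quotes protect spaces and special characters
        print('remote-name')          # save each download under its remote file name

    Something like python3 make_curl_config.py < urls.txt | curl -K- would then drive curl from a plain list of URLs; urls.txt is a placeholder name.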
    
  • 2020-12-30 04:16

    Use xargs, which converts stdin into arguments.

    tail 1.log | xargs -L 1 wget
    
  • 2020-12-30 04:18

    Try piping the tail -f through python -c $'import pycurl;c=pycurl.Curl()\nwhile True: c.setopt(pycurl.URL,raw_input().strip()),c.perform()'

    This gets curl (well, you probably meant the command-line curl, and I'm calling it as a library from a Python one-liner, but it's still curl) to fetch each URL immediately, while still taking advantage of keeping the socket to the server open if you're requesting multiple URLs from the same server in sequence. It's not completely robust, though: if one of your URLs is duff, the whole command will fail (you might want to make it a proper Python script and add try/except to handle this), and there's also the small detail that it will throw EOFError on EOF (though I'm assuming that's not important if you're using tail -f).
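
    If you do want that "proper Python script" version, here is a rough sketch under a couple of assumptions: Python 3 with the pycurl package installed, and that logging a failure and moving on is an acceptable way to handle bad URLs (the script name is also made up).

    # fetch_urls.py -- hypothetical sketch: read URLs from stdin and fetch each one
    # with a single reused pycurl handle, so connections to the same server can
    # stay open between requests; bad URLs are reported instead of aborting.
    import sys
    import pycurl

    c = pycurl.Curl()
    for line in sys.stdin:            # iterating over stdin ends cleanly at EOF
        url = line.strip()
        if not url:
            continue
        try:
            c.setopt(pycurl.URL, url)
            c.perform()               # response body goes to stdout by default
        except pycurl.error as exc:
            print(f"failed: {url}: {exc}", file=sys.stderr)
    c.close()

    Run it as tail -f 1.log | python3 fetch_urls.py; since tail -f never closes the pipe, the loop simply waits for more URLs instead of hitting end-of-file.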

  • 2020-12-30 04:19

    A more efficient way, if you are downloading files from the same web server, is to avoid xargs entirely and let wget read the URL list from stdin itself:

    wget -q -N -i - << EOF
    http://sitename/dir1/file1
    http://sitename/dir2/file2
    http://sitename/dir3/file3
    EOF
    