Script to get the HTTP status code of a list of URLs?

2020-11-30 17:17

I have a list of URLS that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.

I only need the returned HTTP status code.

8 Answers
  • 2020-11-30 17:43

    Use curl to fetch only the HTTP header (not the whole file) and parse it:

    $ curl -I  --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
    200
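The `head -1 | cut` parsing step can be sanity-checked without a network round-trip by feeding it a canned status line; the sample header below is an assumption, not real curl output:

```shell
# A hypothetical first header line standing in for `curl -I` output;
# head -1 keeps the status line, cut extracts its second field.
printf 'HTTP/1.1 301 Moved Permanently\r\n' | head -1 | cut -d' ' -f2
# prints 301
```

Note that an HTTP/2 response starts its status line with `HTTP/2 200`, so the second field is still the code.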
    
  • 2020-11-30 17:43

    Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:

    cat url.lst |
      parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile
    

    In this particular case it may be safe to use xargs, because the output is so short; the real problem is that if someone later changes the code to do something bigger, it will no longer be safe, and if someone reading this replaces curl with something else, that may not be safe either.
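For comparison, the xargs fan-out can be sketched offline with echo standing in for curl; the url.lst contents below are made up:

```shell
# Two placeholder URLs stand in for the real list.
printf '%s\n' http://a.example http://b.example > url.lst

# -n 1 runs one invocation per URL; echo stands in for curl here.
# Adding -P <n> parallelizes the invocations, which is where the
# mixed-output pitfall described above comes in.
< url.lst xargs -n 1 echo checking
# checking http://a.example
# checking http://b.example
```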

  • 2020-11-30 17:49

    Curl has a specific option, --write-out, for this:

    $ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
    200
    
    • -o /dev/null throws away the usual output
    • --silent throws away the progress meter
    • --head makes an HTTP HEAD request instead of a GET
    • --write-out '%{http_code}\n' prints the required status code

    To wrap this up in a complete Bash script:

    #!/bin/bash
    while read -r LINE; do
      curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
    done < url-list.txt
    

    (Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl invocation, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
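One way to generate that repetition programmatically is to build the `-o /dev/null <url>` pairs in a bash array; this is a sketch, not the answer's code, and the sample URLs below are made up:

```shell
#!/bin/bash
# Made-up sample list; in practice url-list.txt already exists.
printf '%s\n' http://a.example http://b.example > url-list.txt

# Build repeated "-o /dev/null <url>" pairs so a single curl
# process can check every URL (requires bash arrays).
args=()
while read -r url; do
  args+=(-o /dev/null "$url")
done < url-list.txt

# Preview the combined command line; drop the leading echo to run it.
echo curl --silent --head --write-out '%{http_code} %{url_effective}\n' "${args[@]}"
```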

  • 2020-11-30 17:49

    I found a tool called "webchk", written in Python, that returns the status code for a list of URLs: https://pypi.org/project/webchk/

    Output looks like this:

    ▶ webchk -i ./dxieu.txt | grep '200'
    http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
    https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
    https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)
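The grep filter works on any such per-URL report; as an offline illustration on made-up lines in the same format, inverting the match lists the failures instead:

```shell
# Canned report lines in webchk's output format (made-up URLs and
# timings); grep -v ' 200 ' keeps only the non-OK entries.
printf '%s\n' \
  'http://a.example/login ... 200 OK (0.108)' \
  'http://b.example/missing ... 404 Not Found (0.152)' |
  grep -v ' 200 '
# http://b.example/missing ... 404 Not Found (0.152)
```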
    

    Hope that helps!

  • 2020-11-30 17:52

    wget -S -i file will get you the headers from each URL listed in file.

    Filter through grep for the status code specifically.
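A sketch of that filtering step, run on canned `wget -S`-style headers (the sample lines are an assumption, not real output):

```shell
# wget -S prints response headers indented by two spaces; grep -o
# pulls out just "HTTP/<version> <code>" from the status line.
printf '  HTTP/1.1 404 Not Found\n  Content-Type: text/html\n' |
  grep -o 'HTTP/[0-9.]* [0-9]*'
# HTTP/1.1 404
```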

  • 2020-11-30 17:58

    This relies on widely available wget, present almost everywhere, even on Alpine Linux.

    wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
    

    The explanations are as follows:

    --quiet

    Turn off Wget's output.

    Source - wget man pages

    --spider

    [ ... ] it will not download the pages, just check that they are there. [ ... ]

    Source - wget man pages

    --server-response

    Print the headers sent by HTTP servers and responses sent by FTP servers.

    Source - wget man pages

    What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the need to redirect them to standard output with 2>&1.

    With the headers on standard output, we can pipe them to awk to extract the HTTP status code. That code is:

    • the second ($2) whitespace-separated field on the line
    • on the very first line of the header: NR==1

    And because we want to print it... {print $2}.
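The awk step can be exercised on its own with captured header text; the sample lines below are made up:

```shell
# First lines of wget --server-response output; NR==1 restricts awk
# to the status line, and $2 is the status code field.
printf 'HTTP/1.1 200 OK\nCache-Control: private\n' | awk 'NR==1{print $2}'
# prints 200
```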

    wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
    