wget reject still downloads file

隐瞒了意图╮  2021-02-13 00:30

I only want the folder structure, but I couldn't figure out how to do that with wget. Instead I am using this:

wget -R pdf,css,gif,txt,png -np -r http://example.com

1 Answer
  名媛妹妹  2021-02-13 01:11

    That appears to be how wget was designed to work. When performing recursive downloads, non-leaf files that match the reject list are still downloaded so they can be harvested for links, then deleted.

    From the in-code comments (recur.c):

    Either --delete-after was specified, or we loaded this otherwise rejected (e.g. by -R) HTML file just so we could harvest its hyperlinks -- in either case, delete the local file.
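
    One way to see this in practice is to keep a transfer log and compare it with what ends up on disk. A rough sketch using the command from the question (the log filename is only illustrative):

      # Recursive run with a reject list, keeping a transfer log
      wget -R pdf,css,gif,txt,png -np -r -o wget.log http://example.com

      # Requests that actually completed, including rejected-but-harvested HTML pages
      grep -c 'saved' wget.log

      # What remains on disk after wget has deleted the harvested rejects
      find example.com -type f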

    We ran into this in a past project where we had to mirror an authenticated site, and wget kept hitting the logout pages even though it was meant to reject those URLs. We could not find any option to change this behaviour.

    The solution we ended up with was to download, hack and build our own version of wget. There's probably a more elegant approach to this, but the quick fix we used was to add the following rules to the end of the download_child_p() routine in recur.c (modified to match your requirements):

      /* Extra rules: skip URLs whose path ends in one of the unwanted
         suffixes.  match_tail() is wget's own helper from utils.c; the
         third argument (0) keeps the comparison case-sensitive. */
      if (match_tail(url, ".pdf", 0)) goto out;
      if (match_tail(url, ".css", 0)) goto out;
      if (match_tail(url, ".gif", 0)) goto out;
      if (match_tail(url, ".txt", 0)) goto out;
      if (match_tail(url, ".png", 0)) goto out;
      /* --- end extra rules --- */
    
      /* The URL has passed all the tests.  It can be placed in the
         download queue. */
      DEBUGP (("Decided to load it.\n"));
    
      return 1;
    
     out:
      DEBUGP (("Decided NOT to load it.\n"));
    
      return 0;
    }
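
    For reference, rebuilding wget from source after a change like this follows the usual GNU autotools flow. A rough sketch, assuming a recent release tarball (the version number below is only an example, the routine's exact name may differ between wget releases, and configure may need --with-ssl=openssl or --without-ssl depending on which TLS libraries are installed):

      # Fetch and unpack a wget source tarball (version is illustrative)
      wget https://ftp.gnu.org/gnu/wget/wget-1.21.4.tar.gz
      tar xf wget-1.21.4.tar.gz
      cd wget-1.21.4

      # Add the extra reject rules to download_child_p() in src/recur.c,
      # then configure and build
      ./configure
      make

      # The patched binary is src/wget; run it in place or install it
      ./src/wget -np -r http://example.com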
    
