Using wget to fake a browser?


Question


I'd like to crawl a web site to build its sitemap.

The problem is, the site uses an htaccess file to block spiders, so the following command only downloads the homepage (index.html) and stops, even though that page does contain links to other pages:

wget -mkEpnp -e robots=off -U Mozilla http://www.acme.com
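
For reference (assuming GNU wget), the combined short options expand to the long forms below; this is the same command spelled out:

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent -e robots=off --user-agent="Mozilla" http://www.acme.com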

Since I have no problem accessing the rest of the site with a browser, I assume the "-e robots=off -U Mozilla" options aren't enough to have wget pretend it's a browser.

Are there other options I should know about? Does wget handle cookies by itself?

Thank you.

--

Edit: I added these to wget.ini, to no avail:

hsts=0
robots = off
header = Accept-Language: en-us,en;q=0.5
header = Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
header = Connection: keep-alive
user_agent = Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:40.0) Gecko/20100101 Firefox/40.0
referer = /
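
In case that settings file isn't being picked up automatically, the path can also be passed explicitly (the --config option is available in recent GNU wget releases; the file name here is just the one used above):

wget --config=wget.ini -mkEpnp -e robots=off http://www.acme.com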

--

Edit: Found it.

The pages linked from the homepage were hosted on a different server, so wget ignored them. Just add "--span-hosts" to tell wget to follow links to other hosts, and "-D www.remote.site.com" if you want to restrict spidering to that domain.
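
A sketch of the full command with those two options added (assuming both hosts should stay in scope, so both domains are listed):

wget -mkEpnp -e robots=off -U Mozilla --span-hosts -D www.acme.com,www.remote.site.com http://www.acme.com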


Answer 1:


You might want to set the User-Agent to something more complete than just "Mozilla", for example:

wget --user-agent="Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"
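
Combined with the mirroring options from the question, that might look like this (the URL is just the example site from the question):

wget -mkEpnp -e robots=off --user-agent="Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" http://www.acme.com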


Source: https://stackoverflow.com/questions/43182879/using-wget-to-fake-browser
