Python's urllib2 doesn't work on some sites


Question


I found that you can't read from some sites using Python's urllib2 (or urllib). An example...

import urllib2

urllib2.urlopen("http://www.dafont.com/").read()
# Returns ''

These sites work when you visit them with a browser. I can even scrape them using PHP (I didn't try other languages). I have seen other sites with the same issue, but I can't remember the URLs at the moment.

My questions are...

  1. What is the cause of this issue?
  2. Any workarounds?

Answer 1:


I believe the request is being blocked based on its User-Agent header. You can change the User-Agent with the following sample code:

import urllib2

USERAGENT = 'something'  # e.g. a browser-like User-Agent string
HEADERS = {'User-Agent': USERAGENT}

req = urllib2.Request(URL_HERE, headers=HEADERS)  # URL_HERE is a placeholder for the target URL
f = urllib2.urlopen(req)
s = f.read()
f.close()



Answer 2:


Try setting a different user agent. Check the answers in this link.
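A minimal sketch of one way to do that with urllib2, assuming it is acceptable to install a global opener for the whole script (the User-Agent string below is just an example value, not something the original answer specifies):

import urllib2

# Install an opener whose default headers include a browser-like User-Agent,
# so every subsequent urllib2.urlopen() call sends it automatically.
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (compatible; example)')]
urllib2.install_opener(opener)

html = urllib2.urlopen('http://www.dafont.com/').read()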




Answer 3:


I'm the person who posted the question. I have some suspicions, but I'm not sure about them; that's why I posted the question here.

What is the cause of this issue?

I think it's due to the host blocking the urllib library via robots.txt or .htaccess, but I'm not sure about that, or even whether it's possible.
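One reason such blocking can work: by default, urllib2 identifies itself with a Python-specific User-Agent string that a server can match and refuse. A quick check (Python 2):

import urllib2

# The default opener advertises a User-Agent like 'Python-urllib/2.7',
# which a server can easily recognize and reject.
opener = urllib2.build_opener()
print(opener.addheaders)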

Any workaround for this issue?

If you are on Unix, this will work...

import commands
contents = commands.getoutput("curl -s '" + url + "'")
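
The commands module is Python 2 only (it was removed in Python 3). As an alternative sketch under the same assumptions (curl is installed and url is already defined), the shell-out can also be done with subprocess:

import subprocess

# Run curl silently (-s) and capture its stdout as the page contents.
contents = subprocess.check_output(['curl', '-s', url])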


Source: https://stackoverflow.com/questions/2572266/pythons-urllib2-doesnt-work-on-some-sites
