Python urllib2.open Connection reset by peer error

独厮守ぢ 2021-01-20 15:30

I'm trying to scrape a page using Python.

The problem is, I keep getting [Errno 54] Connection reset by peer.

The error comes when I run this code -
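
(The original snippet isn't shown here; a minimal reconstruction, assuming a plain urllib2 fetch of the course-materials URL quoted in the answers below:)

    import urllib2

    # Assumed reconstruction of the failing fetch; the original snippet is not preserved
    url = "http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse"
    data = urllib2.urlopen(url).read()   # raises socket.error: [Errno 54] Connection reset by peer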

2 Answers
  • 2021-01-20 15:43
    $> telnet www.bkstr.com 80
    Trying 64.37.224.85...
    Connected to www.bkstr.com.
    Escape character is '^]'.
    GET /webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse HTTP/1.0
    
    Connection closed by foreign host.
    

    You're not going to have any joy fetching that URL from Python, or from anywhere else. If it works in your browser, then there must be something else going on, like cookies or authentication. Or, possibly, the server's broken or they've changed their configuration.

    To check, try opening it in a browser that has never visited that site before; then log in and try it again.

    Edit: It was cookies after all:

    import cookielib, urllib2

    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    # Hit the home page first so the site can set its cookies
    opener.open("http://www.bkstr.com/")
    # Now open the page we want, sending those cookies back
    data = opener.open("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse").read()
    

    The output looks ok, but you'll have to check that it does what you want :)
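
    (A Python 3 note: cookielib and urllib2 were folded into http.cookiejar and urllib.request, so a rough equivalent sketch of the same cookie-jar approach, assuming Python 3, is:)

    import http.cookiejar
    import urllib.request

    cj = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    # Hit the home page first so the site can set its session cookies
    opener.open("http://www.bkstr.com/")
    # Then fetch the page we actually want, sending those cookies back
    data = opener.open("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse").read()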

  • 2021-01-20 15:52

    I came across a similar error just recently. The connection kept dropping out and being reset. I tried cookie jars, extended delays, and different headers/user agents, but nothing worked. In the end the fix was simple: I went from urllib2 to requests. The old:

    import urllib2
    opener = urllib2.build_opener()
    buf = opener.open(url).read()
    

    The new:

    import requests
    buf = requests.get(url).text
    

    After that everything worked perfectly.
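
    (If a plain requests.get() ever hits the same reset because the site insists on cookies, one sketch worth trying is a requests.Session, which persists cookies across calls much like the cookie-jar fix in the other answer; this assumes the same course page as above:)

    import requests

    session = requests.Session()              # keeps cookies between requests
    session.get("http://www.bkstr.com/")      # pick up any cookies the site sets
    buf = session.get("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse").text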
