Make Urllib2 move through pages


Question


I am trying to scrape http://targetstudy.com/school/schools-in-chhattisgarh.html

I am using lxml.html and urllib2.

I want to somehow follow all the pages by clicking the next-page link, download each page's source, and stop at the last page. The href for the next page is ['?recNo=25']

Could someone please advise how to do that? Thanks in advance.

Here is my code,

    import urllib2
    import lxml.html
    import itertools
    url = "http://targetstudy.com/school/schools-in-chhattisgarh.html"
    req = urllib2.Request(url, headers={ 'User-Agent': 'Mozilla/5.0' })
    stuff = urllib2.urlopen(req).read().encode('ascii', 'ignore')
    tree = lxml.html.fromstring(stuff)
    print stuff

    links = tree.xpath("(//ul[@class='pagination']/li/a)[last()]/@href")
    for link in links:
        req = urllib2.Request(url, headers={ 'User-Agent': 'Mozilla/5.0' })
        stuff = urllib2.urlopen(req).read().encode('ascii', 'ignore')
        tree = lxml.html.fromstring(stuff)
        print stuff
        links = tree.xpath("(//ul[@class='pagination']/li/a)[last()]/@href")

But all it's doing is going to the 2nd page and not going any further.

Please help me


Answer 1:


I expect all your problems are from overwriting your list at the end of the loop. Assuming the rest of your code works, this might be a better solution.

    import urllib2
    import urlparse
    import lxml.html

    url = "http://targetstudy.com/school/schools-in-chhattisgarh.html"

    links = [url]
    visited = []
    while len(links) > 0:
        # take a link out of the list and mark it as visited
        link = links.pop()
        visited.append(link)

        # open the link and read the contents (lxml handles the encoding itself)
        req = urllib2.Request(link, headers={'User-Agent': 'Mozilla/5.0'})
        stuff = urllib2.urlopen(req).read()
        tree = lxml.html.fromstring(stuff)
        print stuff

        # for every pagination link found in the page
        for new_link in tree.xpath("(//ul[@class='pagination']/li/a)[last()]/@href"):
            # the href is relative (e.g. '?recNo=25'), so resolve it against the current page
            new_link = urlparse.urljoin(link, new_link)
            # if the link has not been visited yet and is not already queued
            if new_link not in links and new_link not in visited:
                # add the new link to the list of links to visit
                links.append(new_link)
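
If all you need is to walk forward page by page and stop at the last one, a plain "follow the next link until it repeats" loop is also enough. Here is a minimal sketch along those lines (not the answer's code), assuming the last <a> inside the ul.pagination block is always the "next page" link and that its href is relative, like '?recNo=25':

    import urllib2
    import urlparse
    import lxml.html

    url = "http://targetstudy.com/school/schools-in-chhattisgarh.html"
    seen = set()

    while url and url not in seen:
        seen.add(url)

        # fetch and parse the current page
        req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        tree = lxml.html.fromstring(urllib2.urlopen(req).read())

        # ... process the page here (extract the school listings, etc.) ...

        # assumed: the last <a> in the pagination list is the "next page" link
        hrefs = tree.xpath("(//ul[@class='pagination']/li/a)[last()]/@href")
        if not hrefs:
            break  # no pagination block at all, so stop

        # resolve the relative href (e.g. '?recNo=25') against the current URL
        url = urlparse.urljoin(url, hrefs[0])

The seen set is what makes it stop at the last page: once the "next" candidate resolves to a URL that has already been fetched (or the pagination block disappears), the loop ends.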


Source: https://stackoverflow.com/questions/23853748/make-urllib2-move-through-pages
