Scrapy, scrape pages from second set of links

Backend · Open · 1 answer · 1443 views
深忆病人 2020-12-21 11:37

I've been going through the Scrapy documentation today and trying to get a working version of - https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider - on

1 Answer
  • 2020-12-21 12:15

    Let's start with the logic:

    1. Scrape homepage - fetch all cities
    2. Scrape city page - fetch all unit urls
    3. Scrape unit page - get all desired data

    I've made an example below of how you could implement this in a Scrapy spider. I was not able to find all the info mentioned in your example code, but I hope the code is clear enough for you to understand what it does and how to add the info you need.

    import scrapy
    
    
    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = [
            'http://www.unitestudents.com/',
        ]
    
        # Step 1
        def parse(self, response):
            for city in response.xpath('//select[@id="frm_homeSelect_city"]/option[not(contains(text(),"Select your city"))]/text()').extract(): # Select all cities listed in the select (exclude the "Select your city" option)
                yield scrapy.Request(response.urljoin("/"+city), callback=self.parse_citypage)
    
        # Step 2
        def parse_citypage(self, response):
            for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract(): # Select the URL for each property
                yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)
    
            # I could not find any pagination. Otherwise it would go here.
    
        # Step 3
        def parse_unitpage(self, response):
            unitTypes = response.xpath('//div[@class="room-type-block"]/h5/text()').extract() + response.xpath('//h4[@class="content__header"]/text()').extract()
            for unitType in unitTypes: # There can be multiple unit types so we yield an item for each unit type we can find.
                yield {
                    'name': response.xpath('//h1/span/text()').extract_first(),
                    'type': unitType,
                    # 'price': response.xpath('XPATH GOES HERE'), # Could not find a price on the page
                    # 'distance_beds': response.xpath('XPATH GOES HERE') # Could not find such info
                }
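    A note on the URL handling in steps 1 and 2: `response.urljoin()` resolves a relative URL against the current page's URL, the same way the standard library's `urllib.parse.urljoin` does, so you can check the resulting request URLs in isolation. A minimal sketch (the city name is a hypothetical example, not taken from the site):

    ```python
    from urllib.parse import urljoin

    # response.urljoin(url) behaves like urljoin(response.url, url)
    base = 'http://www.unitestudents.com/'

    # Step 1 builds "/<city>" paths from the option text:
    city_url = urljoin(base, '/' + 'Aberdeen')
    print(city_url)  # http://www.unitestudents.com/Aberdeen

    # Step 2 joins the hrefs scraped from the city page the same way;
    # absolute paths and full URLs are both handled correctly:
    print(urljoin(base, '/aberdeen/some-property'))
    print(urljoin(base, 'http://www.unitestudents.com/aberdeen/some-property'))
    ```
    
    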
    

    I think the code is pretty clean and simple. Comments should clarify why I chose to use the for loops. If something is not clear, let me know and I'll try to explain it.
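    If you want to verify the city-filtering XPath from step 1 without hitting the site, you can run it against a small HTML snippet with lxml (which Scrapy's selectors use internally). The snippet below is an assumed stand-in for the homepage's `<select>`; the real markup may differ:

    ```python
    from lxml import html

    # Assumed stand-in for the homepage's city dropdown (structure is illustrative)
    snippet = """
    <select id="frm_homeSelect_city">
      <option>Select your city</option>
      <option>Aberdeen</option>
      <option>Bristol</option>
    </select>
    """

    tree = html.fromstring(snippet)
    # Same expression as in parse(): the predicate drops the placeholder option
    cities = tree.xpath('//select[@id="frm_homeSelect_city"]'
                        '/option[not(contains(text(),"Select your city"))]/text()')
    print(cities)  # ['Aberdeen', 'Bristol']
    ```
    
    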
