Get scrapy spider to crawl entire site

感情败类 2021-01-31 21:47

I am using scrapy to crawl old sites that I own, and I am using the code below as my spider. I don't mind having files outputted for each webpage, or a database with all the content.

1 Answer
  • 2021-01-31 22:42

    To crawl the whole site you should use CrawlSpider instead of scrapy.Spider.

    Here's an example. For your purposes, try using something like this:

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor
    
    class MySpider(CrawlSpider):
        name = 'example.com'
        allowed_domains = ['example.com']
        start_urls = ['http://www.example.com']
    
        # Follow every link found on the site and hand each page to parse_item
        rules = (
            Rule(LinkExtractor(), callback='parse_item', follow=True),
        )
    
        def parse_item(self, response):
            # Derive a filesystem-safe filename from the URL; the common
            # split("/")[-2] trick breaks on URLs without a trailing slash
            # and makes different pages overwrite each other.
            page = response.url.split('//', 1)[-1].rstrip('/').replace('/', '_')
            filename = page + '.html'
            with open(filename, 'wb') as f:
                f.write(response.body)
    
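    If the spider lives in a standalone file rather than inside a full Scrapy project, it can be run directly with the runspider command (the filename myspider.py is just an assumption here):
    
    scrapy runspider myspider.py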

    Also, take a look at this article
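
    Since the question also mentions a database as an acceptable output, here is a minimal sketch, not part of the original answer, that writes each page into SQLite instead of separate files; the spider name, the site.db filename, and the pages table are made up for illustration:

    import sqlite3
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor
    
    class DbSpider(CrawlSpider):
        name = 'example.com.db'
        allowed_domains = ['example.com']
        start_urls = ['http://www.example.com']
    
        rules = (
            Rule(LinkExtractor(), callback='parse_item', follow=True),
        )
    
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # One connection for the whole crawl; Scrapy runs callbacks in a single thread
            self.conn = sqlite3.connect('site.db')
            self.conn.execute(
                'CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, body BLOB)'
            )
    
        def parse_item(self, response):
            # Upsert the raw page body keyed by URL
            self.conn.execute(
                'INSERT OR REPLACE INTO pages (url, body) VALUES (?, ?)',
                (response.url, response.body),
            )
            self.conn.commit()
    
        def closed(self, reason):
            # Scrapy calls this hook automatically when the crawl finishes
            self.conn.close()

    An item pipeline is the more idiomatic place for persistence in a larger project, but keeping the writes in the spider keeps this example self-contained.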
