Incremental Crawler



  • Concept: detect when a website's data has been updated, and crawl only the newly added data.

  • Core: deduplication!!!

  • Incremental crawler

    • For deep-crawl style sites, the detail-page URLs need to be recorded and checked.

      • Record: save the URL of every detail page that has already been crawled.

        • Store the URLs in a Redis set.
        • When a value is added with Redis's sadd command, the return value is 0 if the value already exists in the set and 1 if it does not.
      • Check: before sending a request for a detail-page URL, look it up in the record first; if the URL is already there, it has been crawled before (see the short sadd sketch below).
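
        A minimal sketch of just this check (assuming a local Redis server on the default port; the URL is a placeholder, and the key name movie_detail_urls matches the spider below):

          from redis import Redis

          conn = Redis(host='127.0.0.1', port=6379)
          url = 'https://www.example.com/detail/1.html'  # placeholder detail-page URL
          print(conn.sadd('movie_detail_urls', url))  # 1 -> new URL, safe to crawl
          print(conn.sadd('movie_detail_urls', url))  # 0 -> already recorded, skip it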

      • Code example

        spider.py

      import scrapy
      from scrapy.linkextractors import LinkExtractor
      from scrapy.spiders import CrawlSpider, Rule
      from redis import Redis
      from zjs_moviePro.items import ZjsMovieproItem


      class MovieSpider(CrawlSpider):
          name = 'movie'
          # shared Redis connection, used for URL dedup here and reused by the pipeline
          conn = Redis(host='127.0.0.1', port=6379)
          # allowed_domains = ['www.xxx.com']
          start_urls = ['https://www.4567tv.tv/index.php/vod/show/id/6.html']

          # pagination links look like /index.php/vod/show/id/6/page/2.html
          rules = (
              Rule(LinkExtractor(allow=r'id/6/page/\d+\.html'), callback='parse_item', follow=False),
          )

          def parse_item(self, response):
              li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
              for li in li_list:
                  name = li.xpath('./div/div/h4/a/text()').extract_first()
                  detail_url = 'https://www.4567tv.tv' + li.xpath('./div/div/h4/a/@href').extract_first()
                  # sadd returns 1 if detail_url was newly added, 0 if it was already in the set
                  ex = self.conn.sadd('movie_detail_urls', detail_url)
                  if ex == 1:  # detail_url was inserted, so it has not been crawled before
                      print('New data found, crawling......')
                      item = ZjsMovieproItem()
                      item['name'] = name
                      yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
                  else:
                      print('This item has already been crawled!')

          def parse_detail(self, response):
              item = response.meta['item']
              desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
              item['desc'] = desc
              yield item
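        To pick up newly published movies, the same spider is simply run again later (e.g. with scrapy crawl movie); only detail URLs not yet in the movie_detail_urls set produce new requests and items, everything else is skipped.
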
      • Define the fields in items.py

        import scrapy
        class ZjsMovieproItem(scrapy.Item):
            # define the fields for your item here like:
            name = scrapy.Field()
            desc = scrapy.Field()
      • In the pipeline file (pipelines.py)

        import json

        class ZjsMovieproPipeline(object):
            def process_item(self, item, spider):
                # reuse the Redis connection created on the spider
                conn = spider.conn
                # an Item object cannot be pushed to Redis directly, so serialize it to JSON first
                conn.lpush('movie_data', json.dumps(dict(item), ensure_ascii=False))
                return item
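
        To verify what the pipeline stored, the movie_data list can be read back from Redis (a minimal sketch, assuming the same local Redis instance and the JSON serialization used in the pipeline above):

          import json
          from redis import Redis

          conn = Redis(host='127.0.0.1', port=6379)
          # walk the whole 'movie_data' list and decode each stored item
          for raw in conn.lrange('movie_data', 0, -1):
              print(json.loads(raw))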