How to combine Scrapy and HtmlUnit to crawl URLs with JavaScript

抹茶落季 2020-12-02 13:03

I'm working with Scrapy to crawl pages; however, I can't handle pages that use JavaScript. People suggested I use HtmlUnit, so I got it installed, but I don't know how to use it.

2 Answers
  • 2020-12-02 13:12

    Here is a working example using Selenium and the PhantomJS headless webdriver in a downloader middleware.

        from scrapy.http import HtmlResponse
        from selenium import webdriver


        class JsDownload(object):

            @check_spider_middleware
            def process_request(self, request, spider):
                # Render the page in PhantomJS so its JavaScript runs, then
                # hand Scrapy the rendered HTML instead of the raw body.
                driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')
                driver.get(request.url)
                body = driver.page_source.encode('utf-8')
                driver.quit()
                return HtmlResponse(request.url, encoding='utf-8', body=body)
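
    One caveat: as written, this launches a fresh PhantomJS process for every request, which is slow; creating the driver once (for example in the middleware's __init__) and calling quit() when the crawl ends would be cheaper.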
    

    I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:

        import functools

        from scrapy import log


        def check_spider_middleware(method):
            @functools.wraps(method)
            def wrapper(self, request, spider):
                # The doubled %% survives the first substitution as a literal
                # %s, leaving a slot for 'executing'/'skipping' below.
                msg = '%%s %s middleware step' % (self.__class__.__name__,)
                if self.__class__ in spider.middleware:
                    spider.log(msg % 'executing', level=log.DEBUG)
                    return method(self, request, spider)
                else:
                    spider.log(msg % 'skipping', level=log.DEBUG)
                    return None

            return wrapper
    

    settings.py:

    DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}
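
    With the JsDownload middleware above, this entry should point at wherever that class actually lives; for example (the module path here is an assumption, not from the original answer):

        DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.jsmiddleware.JsDownload': 500}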
    

    For the wrapper to work, all spiders must have at minimum:

    middleware = set([])
    

    To include a middleware:

        from MyProj.middleware.ModuleName import ClassName

        middleware = set([ClassName])
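
    Putting it together, a minimal spider using this scheme might look like the sketch below (the spider name, start URL, import path, and selector are illustrative assumptions, not from the original answer):

        import scrapy

        from MyProj.middleware.jsmiddleware import JsDownload


        class JsSpider(scrapy.Spider):
            name = 'js_spider'
            start_urls = ['http://example.com/js-page']

            # Opt in to the JS-rendering downloader middleware.
            middleware = set([JsDownload])

            def parse(self, response):
                # response.body is already the PhantomJS-rendered HTML.
                yield {'title': response.css('title::text').extract_first()}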
    

    The main advantage of implementing it this way rather than in the spider is that you only end up making one request. In the solution at reclosedev's second link, for example, the download handler processes the request and then hands the response off to the spider, and the spider makes a brand-new request in its parse_page function -- that's two requests for the same content.

    Another example: https://github.com/scrapinghub/scrapyjs

    Cheers!

  • 2020-12-02 13:15

    To handle pages with JavaScript you can use WebKit or Selenium.

    Here are some snippets from snippets.scrapy.org:

    Rendered/interactive javascript with gtk/webkit/jswebkit

    Rendered Javascript Crawler With Scrapy and Selenium RC
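
    For illustration, here is a minimal sketch of the Selenium-in-the-spider pattern the second snippet describes, written against the modern WebDriver API rather than the long-deprecated Selenium RC (the spider name, URL, and selector are placeholders):

        import scrapy
        from selenium import webdriver


        class SeleniumSpider(scrapy.Spider):
            name = 'selenium_spider'
            start_urls = ['http://example.com/js-page']

            def __init__(self, *args, **kwargs):
                super(SeleniumSpider, self).__init__(*args, **kwargs)
                self.driver = webdriver.Firefox()

            def parse(self, response):
                # Re-fetch the page in a real browser so its JavaScript runs,
                # then parse the rendered DOM with a Scrapy selector.
                self.driver.get(response.url)
                rendered = scrapy.Selector(text=self.driver.page_source)
                yield {'title': rendered.css('title::text').extract_first()}

            def closed(self, reason):
                self.driver.quit()

    Note that this pattern fetches each page twice -- once by Scrapy and once by the browser -- which is exactly the overhead the middleware approach in the other answer avoids.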
