Python Scrapy Dynamic Web Sites

一个人的身影 2020-12-06 08:56

I am trying to scrape a very simple web page with the help of Scrapy and its XPath selectors, but for some reason the selectors I have do not work in Scrapy even though they do work in the browser.

2 Answers
  • 2020-12-06 09:28

    Scrapy only performs a GET request for the URL; it is not a web browser, so it cannot run JavaScript. Because of this, Scrapy alone will not be enough to scrape dynamic web pages.

    In addition, you will need something like Selenium, which basically gives you an interface to several web browsers and their functionality; one such capability is running JavaScript and getting the client-side generated HTML.

    Here is a snippet of how one can go about doing this:

    from Project.items import SomeItem
    from scrapy.linkextractors import LinkExtractor
    from scrapy.selector import Selector
    from scrapy.spiders import CrawlSpider, Rule
    from selenium import webdriver
    import time
    
    class RandomSpider(CrawlSpider):
    
        name = 'RandomSpider'
        allowed_domains = ['random.com']
        start_urls = [
            'http://www.random.com'
        ]
    
        rules = (
            Rule(LinkExtractor(allow=('some_regex_here',)), callback='parse_item', follow=True),
        )
    
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # use any browser you wish
            self.browser = webdriver.Firefox()
    
        def closed(self, reason):
            # called by Scrapy when the spider finishes; quit() also
            # shuts down the driver process, unlike close()
            self.browser.quit()
    
        def parse_item(self, response):
            item = SomeItem()
            self.browser.get(response.url)
            # let the JavaScript load
            time.sleep(3)
    
            # scrape the dynamically generated HTML
            sel = Selector(text=self.browser.page_source)
            item['some_field'] = sel.xpath('some_xpath').getall()
            return item
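
    One note on the fixed `time.sleep(3)`: Selenium also supports explicit waits, which return as soon as the element you care about has been rendered instead of always pausing for the full interval. A minimal sketch (the `wait_for_js` helper and the `#content` selector are placeholders for whatever the target page actually renders):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait
    
    def wait_for_js(browser, css_selector='#content', timeout=10):
        # block until the element appears in the DOM, or raise
        # TimeoutException after `timeout` seconds
        WebDriverWait(browser, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
        )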
    
  • 2020-12-06 09:31

    I think I found the web page you want to extract from: the chapters are loaded by fetching some JSON data based on a "mangaid" that is available in a JavaScript array on the page.

    So fetching the chapters is a matter of making a GET request to a specific /actions/selector/ endpoint, basically emulating what your browser's JavaScript engine is doing. A sketch of that approach follows.
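
    A minimal sketch of the idea (the spider name, URLs, the `id` query parameter, the regex used to pull the "mangaid" out of the inline JavaScript, and the JSON layout are all assumptions for illustration; only the /actions/selector/ path comes from the page):

    import json
    import re
    
    import scrapy
    
    class ChapterSpider(scrapy.Spider):
        name = 'chapters'
        # hypothetical entry page
        start_urls = ['http://www.example.com/some-manga']
    
        def parse(self, response):
            # pull the "mangaid" out of the inline JavaScript array
            mangaid = re.search(r'mangaid\s*=\s*(\d+)', response.text).group(1)
            # emulate the request the page's JavaScript would make
            yield scrapy.Request(
                response.urljoin('/actions/selector/?id=%s' % mangaid),
                callback=self.parse_chapters,
            )
    
        def parse_chapters(self, response):
            # the endpoint is assumed to return a JSON list of chapters
            for chapter in json.loads(response.text):
                yield {'chapter': chapter}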

    You will probably get better performance using this technique than with Selenium, and it only involves (minor) JavaScript parsing; no real interpretation is needed.
