We've been using the scrapy-splash
middleware to pass the scraped HTML source through the Splash
JavaScript engine running inside a Docker container.
If we want to use Splash in the spider, we configure several required project settings and yield a Request
with specific meta
arguments:
yield Request(url, self.parse_result, meta={
    'splash': {
        'args': {
            # set rendering arguments here
            'html': 1,
            'png': 1,
            # 'url' is prefilled from request url
        },
        # optional parameters
        'endpoint': 'render.json',  # optional; default is render.json
        'splash_url': '<url>',      # overrides SPLASH_URL
        'slot_policy': scrapy_splash.SlotPolicy.PER_DOMAIN,
    }
})
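For reference, the required project settings look roughly like this in settings.py; the middleware names and order values below are the ones the scrapy-splash README recommends, and the Splash address is a placeholder:

```python
# settings.py -- scrapy-splash configuration sketch, per the scrapy-splash README
SPLASH_URL = 'http://localhost:8050'  # where the Splash container is listening

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
# de-duplicate requests by their Splash arguments, not just the URL
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```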
This works as documented. But how can we use scrapy-splash
inside the Scrapy shell?
Just wrap the URL you want to shell into the Splash HTTP API.
So you would want something like:
scrapy shell 'http://localhost:8050/render.html?url=http://domain.com/page-with-javascript.html&timeout=10&wait=0.5'
where:
- localhost:8050 is where your Splash service is running;
- url is the URL you want to crawl (and don't forget to urlquote it!);
- render.html is one of the possible HTTP API endpoints; in this case it returns the rendered HTML page;
- timeout is the timeout in seconds;
- wait is the time in seconds to wait for JavaScript to execute before reading/saving the HTML.
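Building that shell URL by hand is error-prone because the target URL must itself be percent-encoded; a minimal sketch using only the standard library (the Splash host and target page are placeholders):

```python
from urllib.parse import urlencode

# Hypothetical Splash host and target page
splash_host = 'http://localhost:8050'
target = 'http://domain.com/page-with-javascript.html'

# urlencode percent-encodes the target URL for us
query = urlencode({'url': target, 'timeout': 10, 'wait': 0.5})
shell_url = f'{splash_host}/render.html?{query}'
print(shell_url)
```

Pass the printed URL, in quotes, as the argument to scrapy shell.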
You can also run scrapy shell
without arguments inside a configured Scrapy project, then create req = scrapy_splash.SplashRequest(url, ...)
and call fetch(req).
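Inside the shell, the session looks roughly like this (fetch and response are globals the shell provides; the URL and wait value are placeholders):

```python
# Run `scrapy shell` (no URL) from inside the configured project, then:
>>> import scrapy_splash
>>> req = scrapy_splash.SplashRequest(
...     'http://domain.com/page-with-javascript.html',
...     args={'wait': 0.5},  # let JavaScript run before the snapshot
... )
>>> fetch(req)   # the shell sends the request through the splash middlewares
>>> response.css('title::text').get()
```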
For Windows users who use Docker Toolbox:
1. Replace the single quotes with double quotes to prevent the
invalid hostname:http
error.
2. Change localhost to the Docker machine's IP address, shown below the whale logo; for me it was
192.168.99.100.
Finally I got this:
scrapy shell "http://192.168.99.100:8050/render.html?url=https://samplewebsite.com/category/banking-insurance-financial-services/"
Source: https://stackoverflow.com/questions/35352423/scrapy-shell-and-scrapy-splash