Question
I'm trying to scrape a site whilst taking a screenshot of every page. So far, I have managed to piece together the following code:
import json
import base64

import scrapy
from scrapy_splash import SplashRequest

class ExtractSpider(scrapy.Spider):
    name = 'extract'

    def start_requests(self):
        url = 'https://stackoverflow.com/'
        splash_args = {
            'html': 1,
            'png': 1
        }
        yield SplashRequest(url, self.parse_result, endpoint='render.json', args=splash_args)

    def parse_result(self, response):
        png_bytes = base64.b64decode(response.data['png'])
        imgdata = base64.b64decode(png_bytes)
        filename = 'some_image.png'
        with open(filename, 'wb') as f:
            f.write(imgdata)
It gets onto the site fine (Stack Overflow, for example) and returns data for png_bytes, but when written to a file the result is a broken image (it doesn't load).
Is there a way to fix this, or alternatively a more efficient solution? I have read that Splash Lua scripts can do this, but I have been unable to find a way to implement them. Thanks.
Answer 1:
You are decoding from base64 twice:
png_bytes = base64.b64decode(response.data['png'])
imgdata = base64.b64decode(png_bytes)
Simply do:
def parse_result(self, response):
    imgdata = base64.b64decode(response.data['png'])
    filename = 'some_image.png'
    with open(filename, 'wb') as f:
        f.write(imgdata)
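As for the Splash Lua script route mentioned in the question: it isn't needed for this fix, but the execute endpoint gives you more control (explicit waits, viewport size, and so on). Here is a minimal sketch, assuming a standard scrapy-splash setup; the spider name, the 0.5-second wait, and the output filename are illustrative, not from the original question:

import base64

import scrapy
from scrapy_splash import SplashRequest

# Lua script run by Splash's 'execute' endpoint; args.url is the URL
# passed to SplashRequest.
LUA_SCRIPT = """
function main(splash, args)
    assert(splash:go(args.url))
    assert(splash:wait(0.5))
    -- Binary values in the returned table arrive base64-encoded
    -- in response.data, just as with render.json.
    return {
        html = splash:html(),
        png = splash:png(),
    }
end
"""

class ExtractLuaSpider(scrapy.Spider):
    name = 'extract_lua'

    def start_requests(self):
        yield SplashRequest(
            'https://stackoverflow.com/',
            self.parse_result,
            endpoint='execute',
            args={'lua_source': LUA_SCRIPT},
        )

    def parse_result(self, response):
        # Decode exactly once, as in the fix above.
        imgdata = base64.b64decode(response.data['png'])
        with open('some_image.png', 'wb') as f:
            f.write(imgdata)

Since the goal is a screenshot of every page, you will also want to derive the filename from the response (for instance from response.url) rather than hardcoding it, or each screenshot will overwrite the last.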
Source: https://stackoverflow.com/questions/45172260/scrapy-splash-screenshots