Question:
How can I get the request URL in Scrapy's parse() function? I have a lot of URLs in start_urls, and some of them redirect my spider to the homepage, so I end up with an empty item. I need something like item['start_url'] = request.url to store the original URLs. I'm using BaseSpider.
Answer 1:
The response object passed to parse() already has the info you want; you shouldn't need to override anything. For example:

def parse(self, response):
    print("URL: " + response.request.url)
Answer 2:
The request object is accessible from the response object, so you can do the following:

def parse(self, response):
    item = MyItem()  # assuming an item class like the MyItem used in Answer 3
    item['start_url'] = response.request.url
    return item
Answer 3:
You need to override BaseSpider's make_requests_from_url(url) method to assign the start_url to the item, and then use the Request.meta special keys to pass that item to the parse function:
from scrapy.http import Request

# override method
def make_requests_from_url(self, url):
    item = MyItem()
    # assign the start url to the item
    item['start_url'] = url
    request = Request(url, dont_filter=True)
    # set meta['item'] to use the item in the next callback
    request.meta['item'] = item
    return request

def parse(self, response):
    # access and do something with the item in parse
    item = response.meta['item']
    item['other_url'] = response.url
    return item
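The snippet above assumes a MyItem item class defined elsewhere. A minimal sketch of such a class, with only the two fields the answer actually uses, might look like this:

import scrapy

class MyItem(scrapy.Item):
    # fields referenced in the code above
    start_url = scrapy.Field()
    other_url = scrapy.Field()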
Hope that helps.
Answer 4:
There is no need to store the requested URLs yourself (and note that Scrapy does not process URLs in the same sequence as they appear in start_urls). Instead, use the following:

response.request.meta['redirect_urls']

This gives you the list of redirects that occurred, e.g. ['http://requested_url', 'https://redirected_url', 'https://final_redirected_url'].

To access the first URL from that list, you can use:

response.request.meta['redirect_urls'][0]
For more, see the Scrapy documentation on RedirectMiddleware, which says:

"This middleware handles redirection of requests based on response status. The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key."
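One caveat: the redirect_urls key is only present when at least one redirect actually happened. A sketch of a parse() that handles both cases by falling back to the request's own URL (the variable names are illustrative):

def parse(self, response):
    # redirect_urls exists only if the request was redirected at least once
    start_url = response.request.meta.get(
        'redirect_urls', [response.request.url]
    )[0]
    self.logger.info("started at %s, ended at %s", start_url, response.url)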
Hope this helps.
Answer 5:
Python 3.5
Scrapy 1.5.0
from scrapy.http import Request

# override method
def start_requests(self):
    for url in self.start_urls:
        item = {'start_url': url}
        request = Request(url, dont_filter=True)
        # set meta['item'] to use the item in the next callback
        request.meta['item'] = item
        yield request

# use the meta variable
def parse(self, response):
    url = response.meta['item']['start_url']
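As a usage note, the same thing can be written more compactly by passing meta directly to the Request constructor; this is a sketch under the same Python/Scrapy version assumptions as the answer above:

from scrapy.http import Request

def start_requests(self):
    for url in self.start_urls:
        # attach the original URL to the request in one step
        yield Request(url, dont_filter=True, meta={'item': {'start_url': url}})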
Source: https://stackoverflow.com/questions/20081024/scrapy-get-request-url-in-parse