How to access scrapy settings from item Pipeline

Backend · Unresolved · 4 answers · 1198 views
攒了一身酷 · 2020-12-07 20:48

How do I access the Scrapy settings defined in settings.py from an item pipeline? The documentation mentions they can be accessed through the crawler in extensions, but I don't see how to do that from a pipeline.

4 Answers
  • 2020-12-07 21:32

    UPDATE (2020-10-28)
    Please note that this answer is now ~7 years old, so its validity can no longer be ensured.

    The way to access your Scrapy settings (as defined in settings.py) from within your_spider.py is simple. All the other answers are way too complicated. The reason for this is the very poor maintenance of the Scrapy documentation, combined with many recent updates and changes. Neither the "Settings" documentation section "How to access settings" nor the "Settings API" bothers to give a workable example. Here's an example of how to get your current USER_AGENT string.

    Just add the following lines to your_spider.py:

    # To read your project settings (from settings.py):
    import scrapy
    from scrapy.utils.project import get_project_settings
    ...
    class YourSpider(scrapy.Spider):
        ...
        def parse(self, response):
            ...
            settings = get_project_settings()
            print("Your USER_AGENT is:\n%s" % settings.get('USER_AGENT'))
            ...
    

    As you can see, there's no need to use @classmethod or re-define the from_crawler() or __init__() methods. Hope this helps.

    PS. I'm still not sure why using from scrapy.settings import Settings doesn't work the same way, since it would seem the more obvious import.

  • 2020-12-07 21:32

    Ok, so the documentation at http://doc.scrapy.org/en/latest/topics/extensions.html says that

    The main entry point for a Scrapy extension (this also includes middlewares and pipelines) is the from_crawler class method which receives a Crawler instance which is the main object controlling the Scrapy crawler. Through that object you can access settings, signals, stats, and also control the crawler behaviour, if your extension needs such a thing.

    So your pipeline can define a from_crawler() class method to pull out the settings:

    @classmethod
    def from_crawler(cls, crawler):
        settings = crawler.settings
        my_setting = settings.get("MY_SETTING")
        return cls(my_setting)
    

    The crawler engine then calls the pipeline's __init__() method with my_setting, like so:

    def __init__(self, my_setting):
        self.my_setting = my_setting
    

    Other methods can then access it as self.my_setting, as expected.

    Alternatively, in the from_crawler() function you can pass the crawler.settings object to __init__(), and then access settings from the pipeline as needed instead of pulling them all out in the constructor.
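
    Putting the pieces together, the whole pattern can be sketched with plain stand-in classes. Note this is an illustrative sketch, not Scrapy's actual implementation: `Crawler` here is a fake that just carries a settings dict, and `MY_SETTING` / `MyPipeline` are made-up names.

    ```python
    # Stand-ins mimicking how Scrapy wires settings into a pipeline.
    # In a real project, Scrapy builds the crawler object and calls
    # from_crawler() itself; we simulate that call at the bottom.

    class Crawler:
        """Fake stand-in for scrapy.crawler.Crawler: just holds settings."""
        def __init__(self, settings):
            self.settings = settings

    class MyPipeline:
        def __init__(self, my_setting):
            # Store the single value pulled out in from_crawler()
            self.my_setting = my_setting

        @classmethod
        def from_crawler(cls, crawler):
            # Read the setting off the crawler and pass it to __init__
            my_setting = crawler.settings.get("MY_SETTING")
            return cls(my_setting)

    # Simulate what the crawler engine does at startup:
    crawler = Crawler({"MY_SETTING": "hello"})
    pipeline = MyPipeline.from_crawler(crawler)
    print(pipeline.my_setting)  # -> hello
    ```

    The same flow applies unchanged in a real pipeline; only the crawler object comes from Scrapy instead of being constructed by hand.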

  • 2020-12-07 21:33

    If your project structure is quite flat, why not just:

    # pipelines.py
    from myproject import settings
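
    With a plain module import, settings.py is just a Python module, so you read values as module attributes rather than through `.get()`. A runnable sketch (the module is faked with `types.ModuleType` here; `MY_SETTING` is a made-up name, and in a real project you'd simply write `from myproject import settings`):

    ```python
    # Sketch: settings.py is an ordinary Python module, so importing it
    # gives plain attribute access. We build a fake module for a
    # self-contained example.
    import types

    settings = types.ModuleType("settings")
    settings.MY_SETTING = "direct-value"  # what you'd write in settings.py

    print(settings.MY_SETTING)  # -> direct-value
    ```

    One trade-off to be aware of: a direct module import sees only what's literally written in settings.py; overrides applied at runtime (e.g. from the command line or a spider's custom_settings) won't be reflected, which is why the crawler-based approaches above are generally preferred.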
    
  • 2020-12-07 21:35

    The correct answer is: it depends where in the pipeline you wish to access the settings.

    avaleske has answered as if you wanted access to the settings outside of your pipeline's process_item() method, but it's very likely that's exactly where you'll want the setting, and in that case there is a much easier way, since the spider instance itself is passed in as an argument:

    class PipelineX(object):

        def process_item(self, item, spider):
            wanted_setting = spider.settings.get('WANTED_SETTING')
            ...
            return item
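
    To see the access pattern in isolation, here is a runnable sketch with a stand-in spider object (`FakeSpider`, `WANTED_SETTING`, and the `tagged_with` key are made-up names; in Scrapy the real spider's `settings` attribute is populated by the crawler, and it's a Settings object rather than a dict):

    ```python
    # Stand-ins mimicking Scrapy's call into process_item(item, spider).
    # A dict is enough to illustrate reading spider.settings.

    class FakeSpider:
        def __init__(self, settings):
            self.settings = settings

    class PipelineX:
        def process_item(self, item, spider):
            # Read a project setting straight off the spider passed in:
            wanted_setting = spider.settings.get("WANTED_SETTING")
            item["tagged_with"] = wanted_setting
            return item  # pipelines must return the item (or raise DropItem)

    spider = FakeSpider({"WANTED_SETTING": "value-from-settings-py"})
    item = PipelineX().process_item({}, spider)
    print(item)  # -> {'tagged_with': 'value-from-settings-py'}
    ```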
    