Scrapyd-Deploy: Errors due to using os path to set directory

Submitted by 蓝咒 on 2020-06-28 05:26:05

Question


I am trying to deploy a Scrapy project to a remote scrapyd server via scrapyd-deploy. The project itself is functional and works perfectly on my local machine, and also on the remote server when I deploy it there via git push prod.

With scrapyd-deploy I get this error:

% scrapyd-deploy example -p apo

{ "node_name": "spider1",
  "status": "error",
  "message": (line breaks restored for readability)

/usr/local/lib/python3.8/dist-packages/scrapy/utils/project.py:90: ScrapyDeprecationWarning: Use of environment variables prefixed with SCRAPY_ to override settings is deprecated. The following environment variables are currently defined: EGG_VERSION
  warnings.warn(
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py", line 40, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/scrapyd/runner.py", line 37, in main
    execute()
  File "/usr/local/lib/python3.8/dist-packages/scrapy/cmdline.py", line 142, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py", line 280, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py", line 152, in __init__
    self.spider_loader = self._get_spider_loader(settings)
  File "/usr/local/lib/python3.8/dist-packages/scrapy/crawler.py", line 146, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py", line 60, in from_settings
    return cls(settings)
  File "/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py", line 24, in __init__
    self._load_all_spiders()
  File "/usr/local/lib/python3.8/dist-packages/scrapy/spiderloader.py", line 46, in _load_all_spiders
    for module in walk_modules(name):
  File "/usr/local/lib/python3.8/dist-packages/scrapy/utils/misc.py", line 77, in walk_modules
    submod = import_module(fullpath)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
  File "<frozen zipimport>", line 259, in load_module
  File "/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py", line 31, in <module>
  File "/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/spiders/allaboutwatches.py", line 36, in GetbidSpider
  File "/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/act_functions.py", line 10, in create_image_dir
NotADirectoryError: [Errno 20] Not a directory: '/tmp/apo-v1.0.2-114-g8a2f218-master-5kgzxesk.egg/bid/../images/allaboutwatches'
}

My guess is that it has something to do with this method I am calling, since part of the error disappears once I comment out the method call:

import os

# function will create a custom-named directory to hold the images of each crawl
def create_image_dir(name):
    project_dir = os.path.dirname(__file__) + '/../'  # <-- absolute dir the script is in
    img_dir = project_dir + "images/" + name
    if not os.path.exists(img_dir):
        os.mkdir(img_dir)
    custom_settings = {
        'IMAGES_STORE': img_dir,
    }
    return custom_settings

Same goes for this method:

import csv
import os

def brandnames():
    brands = dict()

    script_dir = os.path.dirname(__file__)  # <-- absolute dir the script is in
    rel_path = "imports/brand_names.csv"
    abs_file_path = os.path.join(script_dir, rel_path)

    with open(abs_file_path, newline='') as csvfile:
        reader = csv.DictReader(csvfile, delimiter=';', quotechar='"')
        for row in reader:
            brands[row['name'].lower()] = row['name']
    return brands

How can I change the method or the deploy config and keep my functionality as it is?


Answer 1:


  • os.mkdir can only create the final path component, not nested directories; os.makedirs(img_dir, exist_ok=True) is the more robust call.
  • The real problem is os.path.dirname(__file__): under scrapyd your project runs from a zipped .egg, so __file__ points inside the egg under /tmp/..., and the .egg itself is a file rather than a directory. That is why mkdir raises NotADirectoryError. If you don't want images to end up under /tmp, use an absolute path that does not depend on __file__.
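Following the second point, here is a minimal sketch of create_image_dir rewritten to avoid __file__ entirely. The base directory and the APO_DATA_DIR environment variable are assumptions for illustration; point BASE_DIR at any writable location on the scrapyd host:

```python
import os

# Writable base directory on the scrapyd host (assumed path, adjust to your
# setup). Can be overridden with a hypothetical APO_DATA_DIR environment variable.
BASE_DIR = os.environ.get("APO_DATA_DIR", "/tmp/apo-data")

def create_image_dir(name):
    # Build the path from a real directory instead of __file__, which points
    # inside the zipped .egg when the spider runs under scrapyd.
    img_dir = os.path.join(BASE_DIR, "images", name)
    # makedirs creates intermediate directories; exist_ok makes reruns a no-op.
    os.makedirs(img_dir, exist_ok=True)
    return {'IMAGES_STORE': img_dir}
```

For brandnames(), data files bundled inside the egg can still be read without touching the filesystem, e.g. via pkgutil.get_data('bid', 'imports/brand_names.csv'), which works through zipimport and returns the file contents as bytes.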


Source: https://stackoverflow.com/questions/61620407/scrapyd-deploy-errors-due-to-using-os-path-to-set-directory
