Why does the ProcessPoolExecutor ignore the max_workers argument?

Submitted by 给你一囗甜甜゛ on 2020-08-10 08:39:04

Question


I expected that an apscheduler.executors.pool.ProcessPoolExecutor with the max_workers argument set to 1 would not execute more than one job in parallel.

import subprocess

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler


def run_job():
    subprocess.check_call('echo start; sleep 3; echo done', shell=True)

scheduler = BlockingScheduler(
        executors={'processpool': ProcessPoolExecutor(max_workers=1)})

for i in range(20):
    scheduler.add_job(run_job)
scheduler.start()                                

However, up to ten jobs are actually executed in parallel.

Do I misunderstand the concept or is this a bug?


Answer 1:


The reason this isn't working as expected is that you're not specifying which executor the jobs should run in. Jobs without an explicit executor go to the scheduler's default executor, which is a thread pool with 10 workers, which is why you see up to ten jobs running in parallel. Your process pool executor, registered under the key 'processpool', is never used.

Try this instead:

for i in range(20):
    scheduler.add_job(run_job, executor='processpool')


Source: https://stackoverflow.com/questions/34214511/why-does-the-processpoolexecutor-ignore-the-max-workers-argument
