Question
I expected that an apscheduler.executors.pool.ProcessPoolExecutor with the max_workers argument set to 1 would not execute more than one job in parallel.
import subprocess

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler

def run_job():
    subprocess.check_call('echo start; sleep 3; echo done', shell=True)

scheduler = BlockingScheduler(
    executors={'processpool': ProcessPoolExecutor(max_workers=1)})

for i in range(20):
    scheduler.add_job(run_job)

scheduler.start()
However, in practice up to ten jobs are executed in parallel.
Am I misunderstanding the concept, or is this a bug?
Answer 1:
The reason this isn't working as expected is that you're not specifying which executor the job should run in. Since no executor is named, the job goes to the scheduler's default executor, which is a ThreadPoolExecutor allowing 10 concurrent workers by default; that is why you see up to ten jobs running at once.
Try this instead:
for i in range(20):
    scheduler.add_job(run_job, executor='processpool')
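The underlying guarantee the question relies on (a process pool with max_workers=1 runs jobs strictly one at a time) can be checked in isolation with the standard-library concurrent.futures pool, which APScheduler's ProcessPoolExecutor wraps. This is a minimal sketch, independent of APScheduler; the job function and timings are made up for illustration:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def run_job(i):
    # Record when this job started and finished so overlap can be detected.
    start = time.monotonic()
    time.sleep(0.2)  # stand-in for real work
    return (start, time.monotonic())

def main():
    # With max_workers=1 the pool has a single worker process,
    # so submitted jobs are executed strictly one after another.
    with ProcessPoolExecutor(max_workers=1) as pool:
        spans = list(pool.map(run_job, range(5)))
    # No job starts before the previous one has finished.
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        assert next_start >= prev_end
    return spans

if __name__ == '__main__':
    main()
```

The same serialization applies inside APScheduler once the job is actually routed to the process-pool executor via the executor='processpool' argument.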
Source: https://stackoverflow.com/questions/34214511/why-does-the-processpoolexecutor-ignore-the-max-workers-argument