I have a Django application running with uwsgi (with 10 workers) + nginx. I am using apscheduler for scheduling purposes. Whenever I schedule a job it is executed multiple times. From these answers (ans1, ans2) I learned this is because a scheduler is started in each uwsgi worker. I tried conditionally initializing the scheduler by binding it to a socket as suggested in this answer, and also by keeping a status flag in the db, so that only one scheduler instance would be started, but the same problem persists. Also, sometimes when creating a job the scheduler is found not running, and the job stays pending and is never executed.
I initialize apscheduler in the urls module of the Django application with the following code, so the scheduler starts when the application starts.
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.mongodb import MongoDBJobStore
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor

def job_listener(ev):
    print('event', ev)

job_defaults = {
    'coalesce': True,
    'max_instances': 1
}

scheduler = BackgroundScheduler(job_defaults=job_defaults, timezone=TIME_ZONE, daemon=False)
scheduler.add_jobstore(MongoDBJobStore(client=client), 'default')
scheduler.add_executor(ThreadPoolExecutor(), 'default')
scheduler.add_executor(ProcessPoolExecutor(), 'processpool')
scheduler.add_listener(job_listener)

def initialize_scheduler():
    try:
        if scheduler_db_conn.find_one():
            print('scheduler already running')
            return True
        scheduler.start()
        scheduler_db_conn.save({'status': True})
        print('---------------scheduler started --------------->')
        return True
    except Exception:
        return False
I use the following code to create a job.
from scheduler_conf import scheduler

def create_job(arg_list):
    try:
        print('scheduler status-->', scheduler.running)
        job = scheduler.add_job(**arg_list)
        return True
    except Exception:
        print('error in creating job')
        return False
I am not able to configure and run the scheduler properly. I have read all the relevant apscheduler threads but still haven't found a solution.
- If I don't prevent multiple schedulers from running (one in each worker), jobs are executed multiple times.
- But if I limit it to a single scheduler running inside one worker, some jobs stay pending and never execute.
What's the solution for this?
Let's consider the following facts:
(1) UWSGI, by default, pre-loads your Django App into the UWSGI Master process' memory BEFORE forking its workers.
(2) UWSGI "forks" workers from the master, meaning they are essentially copied into the memory of each worker. Because of how fork() is implemented, a child process (i.e. a worker) does not inherit the threads of its parent.
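Fact (2) is easy to verify in isolation. The following standalone sketch (POSIX only; no Django or uWSGI involved) starts a background thread, forks, and shows that the child does not have a live copy of that thread:

```python
import os
import threading
import time

def ticker():
    # Background thread that just sleeps in a loop.
    while True:
        time.sleep(0.1)

t = threading.Thread(target=ticker, daemon=True)
t.start()

pid = os.fork()
if pid == 0:
    # Child: the Thread object was copied into our memory, but the OS
    # thread behind it was not -- only the thread that called fork() runs here.
    os._exit(0 if not t.is_alive() else 1)
else:
    _, status = os.waitpid(pid, 0)
    child_thread_alive = (os.waitstatus_to_exitcode(status) == 1)
    print("thread alive in parent:", t.is_alive())        # True
    print("thread alive in child: ", child_thread_alive)  # False
```

This is exactly what happens to the BackgroundScheduler's thread when uWSGI forks its workers.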
(3) When you call BackgroundScheduler.start(), a thread is created which is responsible for executing jobs on whatever worker/master calls this function.
All you must do is call BackgroundScheduler.start() on the master process, before any workers are created. By doing so, when the workers are created, they WILL NOT INHERIT the BackgroundScheduler thread (#2 above), and thus will not execute any jobs (but they can still schedule/modify/delete jobs by communicating with the jobstore!).
To do this, just make sure you call BackgroundScheduler.start() in whatever function/module instantiates your app. For instance, in the following Django project structure, we'd (likely) want to execute this code in wsgi.py, which is the entry point for the UWSGI server:
mysite/
    manage.py
    mysite/
        __init__.py
        settings.py
        urls.py
        wsgi.py
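A sketch of what that wsgi.py could look like. The scheduler_conf import is an assumption about where your scheduler instance lives, taken from the question's own code; the rest is the stock Django entry point:

```python
# mysite/mysite/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

application = get_wsgi_application()

# uWSGI imports this module once, in the master process, before forking
# workers (as long as lazy-apps is off) -- so start() runs exactly once.
from scheduler_conf import scheduler  # hypothetical module, as in the question
scheduler.start()
```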
Pitfalls:
Don't "initializ[e] apscheduler in urls of the django application.... This will start the scheduler when application starts." The urls module may be loaded by each worker, and thus start() is executed multiple times.
Don't start the UWSGI server in "lazy-apps" mode; this will load the app in each of the workers, after they are created.
Don't run the BackgroundScheduler with the default (memory) jobstore. This will create split-brain syndrome between all workers. You want to enforce a single-point-of-truth, like you are with MongoDB, for all CRUD operations performed on jobs.
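Put together, a uWSGI configuration consistent with the above might look like this (the option values are assumptions, matching the 10 workers from the question):

```ini
[uwsgi]
module = mysite.wsgi:application
master = true
processes = 10
; Do NOT enable lazy-apps: it would import the app (and start the
; scheduler) once per worker instead of once in the master.
; lazy-apps = true
```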
This post may give you more detail, albeit in a Gunicorn (WSGI server) environment.
Source: https://stackoverflow.com/questions/39253537/apscheduler-is-executing-job-multiple-times