I am using celery==4.1.0 and django-celery-beat==1.1.0.
I am running gunicorn + celery + rabbitmq with Django.
This is my config for …
You are starting two new Celery processes (a beat scheduler and a worker) on every deployment without stopping or killing the previous ones, so stale processes pile up after each deployment.
During deployment, stop the existing workers first. If you know the PID:
kill -9 $PID
or, if the process wrote a pidfile:
kill -9 `cat /var/run/myProcess.pid`
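If you don't have the PID at hand, you can look it up first. A minimal sketch, assuming the project is named myproject as in the start commands below:
# list PID and full command line of every matching celery process
pgrep -af "celery -A myproject"
As a side note, kill -9 (SIGKILL) terminates the process immediately, while plain kill (SIGTERM) triggers Celery's warm shutdown, letting the worker finish its current tasks first.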
Alternatively, you can kill every Celery process (workers and beat) at once with
pkill -9 celery
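If you want to see exactly what that would kill before running it, pgrep uses the same matching rules:
pgrep -a celery
Note that pkill matches the pattern against process names, so any unrelated process whose name contains celery would be killed too.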
Now you can start beat and the worker as usual.
celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
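Note that the pidfile-based kill above only works if the processes actually write pidfiles. A sketch of how you could start them so the next deployment can stop them cleanly, with hypothetical pidfile and log paths:
celery -A myproject beat -l info -f /var/log/celery/beat.log --pidfile=/var/run/celery/beat.pid --detach
celery -A myproject worker -l info -f /var/log/celery/worker.log --pidfile=/var/run/celery/worker.pid --detach
Your deploy script can then run kill `cat /var/run/celery/worker.pid` (and the same for beat) before starting the new processes.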
RabbitMQ may also be to blame for the high memory usage. Can you safely restart RabbitMQ?
Also, can you confirm that the expected number of workers is running after a restart?
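A sketch of how you could check both, assuming a systemd-based host (the RabbitMQ service name can differ per distribution):
sudo systemctl restart rabbitmq-server
# each live worker should answer with pong
celery -A myproject inspect ping
# or simply count the running celery processes
pgrep -c -f "celery -A myproject"
If inspect ping reports more workers than you started, old processes are still running.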