Restart celery beat and worker during Django deployment

太阳男子 2021-01-17 04:42

I am using celery==4.1.0 and django-celery-beat==1.1.0.

I am running gunicorn + celery + rabbitmq with Django.

This is my config fo

2 Answers
  • 2021-01-17 05:25

    You are starting two new Celery processes (beat and a worker) on every deployment without stopping/killing the previous ones.

    During deployment, stop the existing workers with

    kill -9 $PID
    kill -9 `cat /var/run/myProcess.pid`
    

    Alternatively, you can just kill all the workers with

    pkill -9 celery
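
    Both of these send SIGKILL, which terminates the processes immediately and cuts off any task that is mid-execution. If you would rather let running tasks finish, a warm shutdown with SIGTERM is gentler. A minimal sketch, assuming the processes were started with the command lines shown in this answer:

    # Warm shutdown: SIGTERM lets the worker finish its currently executing tasks before exiting
    pkill -TERM -f 'celery.*worker' || true
    pkill -TERM -f 'celery.*beat' || true

    # Wait up to ~30s for the old processes to disappear before starting new ones
    for i in $(seq 1 30); do
        pgrep -f 'celery.*(worker|beat)' > /dev/null || break
        sleep 1
    done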
    

    Now you can start workers as usual.

    celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
    celery -A myproject worker -l info -f /var/log/celery/celery.log --detach
    
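    If you record PIDs when starting, the next deployment can stop exactly the processes it started instead of pattern-matching on process names. A minimal restart sketch, assuming hypothetical pidfile paths under /var/run/celery/ (the directory must exist and be writable by the deploy user):

    #!/usr/bin/env bash
    # Run from the Django project directory so -A myproject resolves
    set -e

    BEAT_PIDFILE=/var/run/celery/beat.pid      # hypothetical paths; adjust to your layout
    WORKER_PIDFILE=/var/run/celery/worker.pid

    # Stop the processes from the previous deployment, if any, and wait for them to exit
    for PIDFILE in "$BEAT_PIDFILE" "$WORKER_PIDFILE"; do
        if [ -f "$PIDFILE" ]; then
            kill "$(cat "$PIDFILE")" 2>/dev/null || true
            while kill -0 "$(cat "$PIDFILE")" 2>/dev/null; do sleep 1; done
            rm -f "$PIDFILE"
        fi
    done

    # Start fresh instances; --pidfile records the PIDs for the next deployment to use
    celery -A myproject beat   -l info -f /var/log/celery/celery.log --pidfile="$BEAT_PIDFILE" --detach
    celery -A myproject worker -l info -f /var/log/celery/celery.log --pidfile="$WORKER_PIDFILE" --detach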
  • 2021-01-17 05:30

    RabbitMQ may be to blame for the high memory usage. Can you safely restart RabbitMQ?

    Also, can you confirm that the expected number of workers is running after a restart?
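
    A quick way to check both from the host, assuming the same myproject app name and a local RabbitMQ node:

    # Show which Celery workers are online and responding
    celery -A myproject status

    # Count celery-related processes on this host (beat, workers, and their children)
    pgrep -c -f celery

    # Check RabbitMQ memory usage and look for queues with a growing backlog
    rabbitmqctl status
    rabbitmqctl list_queues name messages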
