What is required for reliable task processing in Celery when using Redis?

Submitted by 綄美尐妖づ on 2021-01-29 10:13:28

Question


We are looking to run Celery with Redis in a Kubernetes cluster, and we currently do not have Redis persistence enabled (everything is in-memory). I am concerned about: Redis restarts (losing in-memory data), worker restarts/outages (due to crashes and/or pod rescheduling), and transient network issues.

When using Celery to do task processing using Redis, what is required to ensure that tasks are reliable?


Answer 1:


On the Redis side, make sure that you enable the persistence features:

https://redis.io/topics/persistence

How to recover redis data from snapshot(rdb file) copied from another machine?
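As a concrete starting point, these are the relevant `redis.conf` directives; the specific values here are illustrative, not tuned recommendations:

```conf
# RDB snapshotting: dump to disk if at least 1 key changed within 900 s
save 900 1

# AOF: log every write to an append-only file (stronger durability than RDB)
appendonly yes
appendfsync everysec
```

RDB gives cheap periodic snapshots; AOF narrows the window of lost writes to about a second with `everysec`. The two can be combined.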

On the Celery side, make sure your tasks are idempotent: if a task is re-submitted, the effect must be the same as if it had run only once.
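A minimal, framework-free sketch of the idea (the `processed` store and the payment scenario are hypothetical; in practice the dedup key would live in a database or in Redis itself, not in process memory):

```python
processed = set()  # in production: a unique-keyed DB table or a Redis SET


def handle_payment(task_id: str, amount: int) -> str:
    """Idempotent handler: redelivery of the same task_id is a no-op."""
    if task_id in processed:
        return "skipped"  # already handled once; safe to ack again
    # ... perform the side effect exactly once ...
    processed.add(task_id)
    return "charged"
```

Running the same task twice then has no extra effect: the first call charges, the second is skipped.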

If a task is in the middle of processing when a restart occurs, then hopefully, once Redis and the app are back up, Celery will see the incomplete task and schedule it again.
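For that redelivery to happen, the worker must acknowledge a task only after it finishes rather than on receipt. A sketch of the relevant `celeryconfig.py` settings (the setting names are real Celery options; the timeout value is illustrative):

```python
# celeryconfig.py -- settings for at-least-once delivery with the Redis broker
task_acks_late = True               # ack after the task completes, not when received
task_reject_on_worker_lost = True   # redeliver if the worker process dies mid-task
broker_transport_options = {
    # Redis transport: unacked tasks become visible again after this many seconds
    "visibility_timeout": 3600,
}
```

With late acks, at-least-once delivery replaces at-most-once, which is exactly why the tasks themselves must be idempotent.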




Answer 2:


To make your Celery cluster more robust when using Redis as a broker (and result backend), I recommend using one (or more) replicas. Unfortunately redis-py does not yet have support for clustered Redis, but that is just a matter of time. In replicated mode, when the master server goes down, a replica takes its place and this is (almost) entirely transparent. Celery also supports Redis Sentinel.
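For the Sentinel setup, Celery accepts a `sentinel://` broker URL listing the sentinel nodes; the hostnames and master name below are placeholders for your own deployment:

```python
# celeryconfig.py -- Sentinel-aware broker configuration
broker_url = (
    "sentinel://sentinel-0:26379;"
    "sentinel://sentinel-1:26379;"
    "sentinel://sentinel-2:26379"
)
broker_transport_options = {"master_name": "mymaster"}
```

Celery asks the sentinels which node is currently master, so a failover does not require a config change on the client side.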

Celery has become much more robust over the years in terms of ensuring that tasks get redelivered in certain critical cases. If a task failed because the worker was lost (there is a configuration parameter for this), or because some exception was thrown, etc., it will be redelivered and executed again.
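The exception case usually goes through Celery's retry machinery (options such as `autoretry_for` and `max_retries` on the task decorator). The core behavior can be sketched without the framework; the function and names here are illustrative, not Celery internals:

```python
def run_with_retries(task, max_retries: int = 3):
    """Re-execute `task` until it succeeds or retries are exhausted."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise  # give up; Celery would then mark the task as failed
```

In Celery itself you would also add a backoff between attempts so a flapping dependency is not hammered with immediate retries.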



Source: https://stackoverflow.com/questions/56978831/what-is-required-for-reliable-task-processing-in-celery-when-using-redis
