Question
Our Python server (Django 1.11.17) uses Celery 4.2.1 with Redis as the broker (the pip redis package we're using is 3.0.1). The Django app is deployed to Heroku, and the Celery broker was set up using Heroku's Redis Cloud add-on.
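For context, the Celery app is wired to the Redis broker roughly like this (a minimal sketch; the module layout, the project name, and the REDISCLOUD_URL environment variable are assumptions about a typical Heroku Redis Cloud setup, not our exact code):

    # celery.py (illustrative)
    import os

    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

    app = Celery('myproject')

    # Heroku's Redis Cloud add-on typically exposes the broker URL through an
    # environment variable (commonly REDISCLOUD_URL).
    app.conf.broker_url = os.environ.get('REDISCLOUD_URL', 'redis://localhost:6379/0')

    # Load old-style CELERY_* settings from Django's settings module and
    # register tasks from installed apps.
    app.config_from_object('django.conf:settings')
    app.autodiscover_tasks()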
The Celery tasks we have should definitely have completed within a minute (median completion time is ~100 ms), but we're seeing that Redis keys and connections are persisting for much, much longer than that (up to 24 hours). Otherwise, tasks are being executed correctly.
What can be happening that's causing these persisting keys and connections in our Redis broker? How can we clear them when Celery tasks conclude?
Here's a Redis Labs screenshot of this happening (all tasks should have completed, so we'd expect zero keys and zero connections).
Answer 1:
Resolved my own question: if the CELERY_IGNORE_RESULT config variable is set to True (which I'm able to do because I don't use any return values from my tasks), then the keys and connections are back under control.
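For reference, here's a minimal sketch of what that looks like in practice (the file names are illustrative, and the per-task variant is an alternative rather than what I used):

    # settings.py (illustrative): old-style Celery setting name, picked up via
    # app.config_from_object('django.conf:settings'). It tells Celery not to
    # store task results in the result backend at all.
    CELERY_IGNORE_RESULT = True

    # tasks.py (illustrative): the same thing can be done per task instead.
    from celery import shared_task

    @shared_task(ignore_result=True)
    def add(x, y):
        return x + y

Presumably the lingering keys were the per-task result entries (celery-task-meta-<task_id>) written by the Redis result backend; with results ignored, those entries are never created.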
Source: Celery project documentation
Source: https://stackoverflow.com/questions/54227369/celery-with-redis-broker-in-django-tasks-successfully-execute-but-too-many-per