celery

Celery with Redis broker in Django: tasks successfully execute, but too many persistent Redis keys and connections remain

感情迁移 submitted on 2021-01-29 11:28:07

Question: Our Python server (Django 1.11.17) uses Celery 4.2.1 with Redis as the broker (the pip redis package we're using is 3.0.1). The Django app is deployed to Heroku, and the Celery broker was set up using Heroku's Redis Cloud add-on. Our Celery tasks should complete well within a minute (median completion time is ~100 ms), but we're seeing Redis keys and connections persist for much, much longer than that (up to 24 hours). Otherwise, tasks are being executed…
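The 24-hour figure is suggestive: Celery's result backend keeps each celery-task-meta-* key in Redis for result_expires seconds, which defaults to one day. A hedged configuration sketch (app name and URLs are placeholders, not the asker's actual settings):

```python
from celery import Celery

# Placeholder app/broker URLs; only the conf values matter here.
app = Celery("proj",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

app.conf.update(
    # Result keys (celery-task-meta-*) default to a 24 h TTL; shorten it
    # if results are consumed quickly after the task finishes.
    result_expires=600,
    # Cap the connection pools so idle connections don't accumulate.
    redis_max_connections=20,
    broker_pool_limit=10,
)
```

If the results are never read at all, `ignore_result=True` on the task avoids creating the result keys entirely.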

celery chord with group of chains with exception in chain

a 夏天 submitted on 2021-01-29 10:44:21

Question: I am using Celery to run a chord with a group of chains. When all tasks (chains) in the group complete successfully, the chord callback fires and things work as I expect. However, when a task in the group fails, in which case I do not expect the chord callback to be called, chord_unlock loops endlessly. How do I avoid the chord_unlock loop when one of the chains in the group fails? Here is my code: @app.task def test1(): logging.info("test1") raise Exception() @app…
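One commonly suggested mitigation is to attach an error callback to the chord body with `Signature.on_error`, so a failing chain marks the chord as failed instead of leaving chord_unlock polling. The sketch below is illustrative: `summarize` and `on_chord_error` stand in for the asker's own tasks, and the broker URL is a placeholder.

```python
import logging
from celery import Celery, chain, chord

app = Celery("proj", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def test1():
    logging.info("test1")
    raise Exception("boom")

@app.task
def summarize(results):
    return results

@app.task
def on_chord_error(request, exc, traceback):
    # Runs when a task in the chord header fails, instead of the callback.
    logging.error("chord failed: %r", exc)

if __name__ == "__main__":
    # Requires a running broker and worker to actually execute.
    header = [chain(test1.s()), chain(test1.s())]
    chord(header)(summarize.s().on_error(on_chord_error.s()))
```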

Celery worker - ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

穿精又带淫゛_ submitted on 2021-01-29 10:43:05

Question: The Celery connection breaks when a long-running task is executed. I am using amqp as the broker and Redis as the backend. Celery runs on a remote host; the command I use for the worker is: pipenv run celery worker -A src.celery_app -l info -P gevent --without-mingle --without-heartbeat --without-gossip -Q queue1 -n worker1 The error log is: [2020-09-09 14:26:46,820: CRITICAL/MainProcess] Couldn't ack 1, reason:ConnectionResetError(10054, 'An existing connection was…
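A common line of attack for "connection forcibly closed" during long tasks is tuning the broker connection settings. The values below are illustrative, not a confirmed fix for this setup, and the app/broker URL is a placeholder for src.celery_app:

```python
from celery import Celery

app = Celery("src.celery_app", broker="amqp://guest@localhost//")

app.conf.update(
    broker_heartbeat=0,                  # disable AMQP heartbeats, which a busy
                                         # gevent worker can miss during long tasks
    broker_connection_timeout=30,        # fail fast on dead connections
    broker_connection_max_retries=None,  # keep retrying the broker forever
    task_acks_late=True,                 # ack after the task finishes, so a lost
                                         # connection doesn't drop the message
)
```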

What is required for reliable task processing in Celery when using Redis?

 ̄綄美尐妖づ submitted on 2021-01-29 10:13:28

Question: We are looking to run Celery/Redis in a Kubernetes cluster, and currently do not have Redis persistence enabled (everything is in-memory). I am concerned about: Redis restarts (losing in-memory data), worker restarts/outages (due to crashes and/or pod scheduling), and transient network issues. When using Celery for task processing with Redis, what is required to ensure that tasks are reliable? Answer 1: On the Redis side, just make sure that you are using the backup features: https://redis.io…
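On the Celery side, the usual reliability levers are late acks, requeue-on-worker-loss, and a Redis visibility timeout longer than your slowest task. All three are real Celery 4 settings, shown here on a placeholder app:

```python
from celery import Celery

app = Celery("proj", broker="redis://redis:6379/0",
             backend="redis://redis:6379/0")

app.conf.update(
    task_acks_late=True,              # ack only after the task completes
    task_reject_on_worker_lost=True,  # requeue if the worker dies mid-task
    broker_transport_options={
        # Unacked tasks are redelivered after this many seconds, so it
        # must exceed your longest task's runtime.
        "visibility_timeout": 3600,
    },
)
```

With in-memory Redis, enabling RDB snapshots or AOF persistence is still needed for queued messages to survive a Redis restart.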

Why doesn't celery send an email at the specified time?

喜你入骨 submitted on 2021-01-29 08:35:30

Question: I want to send mail an hour before an event (a Django model). I checked the date logic in the shell, but Celery neither sends any mail nor prints to the console. I have a feeling I didn't finish writing something, since I haven't worked with scheduled actions before, but I have no ideas. I didn't connect shedule.py anywhere; perhaps this is the reason. tasks.py: @shared_task def event_send_mail(): events = Event.objects.filter(event_date=datetime.now() + timedelta(minutes=60)) for event in…
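The equality filter is a likely culprit: event_date=datetime.now() + timedelta(minutes=60) matches only to the microsecond, so it almost never returns rows. Filtering a small window works instead. The sketch below shows the window logic in plain Python with made-up names; in Django ORM terms it becomes event_date__range=(start, end):

```python
from datetime import datetime, timedelta

def due_in_about_an_hour(event_dates, now, window=timedelta(minutes=1)):
    """Return dates falling inside [now + 60 min, now + 60 min + window)."""
    start = now + timedelta(minutes=60)
    end = start + window
    return [d for d in event_dates if start <= d < end]

now = datetime(2021, 1, 29, 8, 0, 0)
events = [now + timedelta(minutes=60, seconds=30),  # inside the window
          now + timedelta(minutes=90)]              # outside
print(due_in_about_an_hour(events, now))  # only the 9:00:30 event
```

The window should match how often the periodic task runs, so each event falls into exactly one run.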

Signal 11 in a celery task, and then triggering on_failure

那年仲夏 submitted on 2021-01-29 07:18:57

Question: I'm having trouble debugging this and haven't made much progress. I have a long-running async Celery task that sometimes hits a signal 11 (it's a recursive/CPU-bound function that can run into stack-size issues). For example: Process 'ForkPoolWorker-2' pid:5494 exited with 'signal 11 (SIGSEGV)' I'd like to modify my Celery task, task class, and request to catch this and trigger the on_failure function of the task class. I haven't had any luck though. I'm running this with a redis…
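on_failure never runs here because SIGSEGV kills the child process before Python can raise anything. One hedged workaround, assuming the crash comes from deep Python recursion: keep sys.setrecursionlimit low enough that the interpreter raises a catchable RecursionError (which does go through on_failure) instead of overflowing the C stack. Names below are illustrative:

```python
import sys
from celery import Celery, Task

app = Celery("proj", broker="redis://localhost:6379/0")

class NotifyOnFailure(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Runs for ordinary Python exceptions, including RecursionError.
        print(f"task {task_id} failed: {exc!r}")

@app.task(base=NotifyOnFailure)
def deep_recurse(n):
    # Keep the Python-level limit well below the point where the C stack
    # would overflow, so failure surfaces as an exception, not signal 11.
    sys.setrecursionlimit(10_000)
    def go(i):
        return 0 if i == 0 else 1 + go(i - 1)
    return go(n)
```

If the recursion happens in C code rather than Python frames, the limit won't help; running the risky function in a subprocess and checking its exit code is the fallback.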

Getting celery task id

落爺英雄遲暮 submitted on 2021-01-29 06:37:10

Question: I have something like this: @app.task def some_task(): logger.info(app.current_task.request.id) some_func() def some_func(): logger.info(app.current_task.request.id) I get the expected id inside some_task, but it is None inside some_func. How can I get the real task id? Answer 1: You could bind the task and pass the request around rather than relying on a global. @app.task(bind=True) def some_task(self): logger.info(self.request.id) some_func(self.request) def some_func(celery_request=None): #…

How to run airflow with CeleryExecutor on a custom docker image

依然范特西╮ submitted on 2021-01-29 06:32:13

Question: I am adding Airflow to a web application that manually adds a directory containing business logic to the PYTHON_PATH env var, and also does additional system-level setup that I want to be consistent across all servers in my cluster. I've been successfully running Celery for this application with RabbitMQ as the broker and Redis as the task-results backend for a while, and I have prior experience running Airflow with LocalExecutor. Instead of using Pukel's image, I have an entry point for a base…

Celery tasks don't work

纵饮孤独 submitted on 2021-01-28 12:37:25

Question: The Celery docs say that Celery 3.1 works with Django out of the box, but my tasks are not running. I have tasks.py: from celery import task from datetime import timedelta @task.periodic_task(run_every=timedelta(seconds=20), ignore_result=True) def disable_not_confirmed_users(): print "start" Configs: from kombu import Exchange, Queue CELERY_SEND_TASK_ERROR_EMAILS = True BROKER_URL = 'amqp://guest@localhost//' CELERY_DEFAULT_QUEUE = 'project-queue' CELERY_DEFAULT_EXCHANGE = 'project-queue' CELERY_DEFAULT…
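A likely missing piece: periodic tasks only fire when the beat scheduler is running, e.g. `celery -A proj worker -B -l info`. A hedged Celery 3.1-style sketch (the module and app names are placeholders for the asker's project):

```python
from datetime import timedelta
from celery import Celery

app = Celery("proj", broker="amqp://guest@localhost//")

@app.task(ignore_result=True)
def disable_not_confirmed_users():
    print("start")

# Beat reads this schedule and enqueues the task every 20 seconds;
# without a running beat process, nothing is ever enqueued.
app.conf.CELERYBEAT_SCHEDULE = {
    "disable-unconfirmed-every-20s": {
        "task": "proj.disable_not_confirmed_users",
        "schedule": timedelta(seconds=20),
    },
}
```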

Revoke celery tasks with same args/kwargs

我的梦境 submitted on 2021-01-28 08:47:24

Question: Imagine a long-running task with a specific set of args and kwargs. Is there any way to revoke all running and pending tasks with the same args/kwargs before starting a new task, as I'm only interested in the result of the last added task? (The underlying data changes in between two calls.) I tried iterating the results of inspect.active(), inspect.registered(), and inspect.scheduled() to get ALL tasks and then filter/revoke those with my args and kwargs in question. But this was not…
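A sketch of that approach (the function name is made up; note that, depending on the Celery version, inspect() may report args/kwargs as their repr() strings, and scheduled() nests the task info under a "request" key):

```python
def revoke_matching(app, task_name, args, kwargs):
    """Revoke active/reserved/scheduled tasks whose args/kwargs match."""
    insp = app.control.inspect()
    buckets = [insp.active() or {}, insp.reserved() or {},
               insp.scheduled() or {}]
    for bucket in buckets:
        for worker, tasks in bucket.items():
            for t in tasks:
                info = t.get("request", t)  # scheduled() nests under "request"
                if (info.get("name") == task_name
                        # Some versions report args/kwargs as repr() strings.
                        and info.get("args") in (args, repr(list(args)))
                        and info.get("kwargs") in (kwargs, repr(kwargs))):
                    app.control.revoke(info["id"], terminate=True)
```

inspect() only sees tasks that workers have already received or prefetched, so messages still sitting in the broker queue are not covered; for those, keeping your own registry of pending task ids (e.g. in Redis) and revoking by id is more reliable.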