celery

How to find RabbitMQ URL?

百般思念 submitted on 2021-02-07 03:21:52
Question: A RabbitMQ broker URL looks like this: BROKER_URL: "amqp://user:password@remote.server.com:port//vhost". It is not clear where to find the URL, login, and password of RabbitMQ when we need to access it from a remote worker (outside of localhost). In other words, how do we set the RabbitMQ IP address, login, and password from Celery / RabbitMQ?

Answer 1: You can create a new user for accessing your RabbitMQ broker. The port used is normally 5672, but you can change it in your configuration file. So suppose your IP is 1.1.1.1 and
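A minimal sketch of how the pieces of that URL fit together. All values here (user, password, host, vhost) are placeholders; the real ones come from whatever you create on the broker host with `rabbitmqctl`.

```python
# Sketch: assembling the broker URL Celery expects from the credentials
# you configure on the RabbitMQ side. Placeholder values throughout --
# create the real user on the broker host first, e.g.:
#   rabbitmqctl add_user myuser mypassword
#   rabbitmqctl add_vhost myvhost
#   rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
user, password = "myuser", "mypassword"
host, port, vhost = "1.1.1.1", 5672, "myvhost"

BROKER_URL = f"amqp://{user}:{password}@{host}:{port}/{vhost}"
print(BROKER_URL)  # amqp://myuser:mypassword@1.1.1.1:5672/myvhost
```

The same string is then passed to Celery, e.g. `Celery("tasks", broker=BROKER_URL)`, or set as `BROKER_URL` in the Django settings file.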


Django Celery Beat admin updating Cron Schedule Periodic task not taking effect

半世苍凉 submitted on 2021-02-06 20:51:28
Question: I'm running a site using Django 1.10, RabbitMQ, and Celery 4 on CentOS 7. My Celery Beat and Celery Worker instances are controlled by supervisor, and I'm using the django-celery database scheduler. I've scheduled a cron-style task using the cron scheduler in Django admin. When I start the celery beat and worker instances, the job fires as expected. But if I change the schedule time in Django admin, the changes are not picked up unless I restart the celery-beat instance. Is there something I am
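A sketch of the configuration usually suggested for this symptom, assuming the `django-celery-beat` package: beat must be told to use the database-backed scheduler class, which re-reads schedule changes made in Django admin instead of holding the schedule it loaded at startup. The project name is a placeholder.

```python
# Sketch, assuming the django-celery-beat package is installed:
# point beat at the database-backed scheduler so edits made in
# Django admin are picked up without restarting the beat process.
# In Django settings.py:
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"

# Equivalent command-line form (proj is a placeholder app name):
#   celery -A proj beat -l info \
#       --scheduler django_celery_beat.schedulers:DatabaseScheduler
print(CELERY_BEAT_SCHEDULER)
```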


Celery worker hangs without any error

ⅰ亾dé卋堺 submitted on 2021-02-05 13:21:28
Question: I have a production setup running celery workers that make POST / GET requests to a remote service and store the result. It handles a load of around 20k tasks per 15 min. The problem is that the workers go numb for no reason: no errors, no warnings. I have also tried adding multiprocessing, with the same result. In the log I see task execution times increasing, like "succeeded in …s". For more details see https://github.com/celery/celery/issues/2621

Answer 1: If your celery worker gets stuck
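The guardrails usually reached for first with silently-hung workers are task time limits, so a stuck network call gets aborted instead of wedging the pool. A sketch expressed as a Celery config mapping; the setting names follow the Celery 4+ lowercase convention, and the values are illustrative, not tuned.

```python
# Sketch: time-limit settings commonly recommended when workers hang on
# remote POST/GET calls. Values are placeholders to be tuned per workload.
worker_guardrails = {
    "task_soft_time_limit": 60,  # SoftTimeLimitExceeded raised inside the task
    "task_time_limit": 90,       # pool child is killed and replaced after this
    "broker_heartbeat": 10,      # detect dead broker connections sooner
}
# Applied via app.conf.update(**worker_guardrails) on the Celery app.
print(worker_guardrails["task_time_limit"])
```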

how to detect failure and auto restart celery worker

只愿长相守 submitted on 2021-02-05 08:35:59
Question: I use Celery and Celery Beat on my Django-powered website. The server OS is Ubuntu 16.04. Using celery beat, a job is run by a celery worker every 10 minutes. Sometimes the worker shuts down without any useful log messages or errors. So I want a way to detect the status (on/off) of the celery worker (not Beat), and if it has stopped, restart it automatically. How can I do that? Thanks.

Answer 1: In production, you should run Celery, Beat, your app server, etc. as daemons [1] using
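One common daemonization route is supervisor, which restarts a process whenever it exits. A sketch of a program entry; the paths and app name are placeholders for this setup, and a matching `[program:celery-beat]` block would cover beat.

```ini
; Sketch of a supervisor program entry (paths and app name are
; placeholders). autorestart=true makes supervisord bring the worker
; back up whenever it dies; startsecs guards against fast crash loops.
[program:celery-worker]
command=/home/user/venv/bin/celery -A proj worker -l info
directory=/home/user/proj
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
killasgroup=true
```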

Celery groups and chains

蹲街弑〆低调 submitted on 2021-02-04 16:38:05
Question: I need to order some tasks in Celery: some of them should run as a single task, some should run in parallel, and when the tasks in a group have completed, the chain should pass on to the next one: chain( task1.s(), task2.s(), group(task3.s(), task4.s()), group(task5.s(), task6.s(), task7.s()), task7.s() ).delay() But I think what I did is wrong. Does anybody have an idea how to do it? Also, I don't care about sending the result of each task to the others.

Answer 1: This sounds like a chord, i.e. where you execute tasks

Celery - minimize memory consumption

牧云@^-^@ submitted on 2021-02-04 12:18:06
Question: We have ~300 celeryd processes running under Ubuntu 10.04 64-bit. When idle, every process takes ~19 MB RES and ~174 MB VIRT, so it's around 6 GB of RAM in idle for all processes. In the active state, a process takes up to 100 MB RES and ~300 MB VIRT. Every process uses minidom (the XML files are < 500 KB, with a simple structure) and urllib. The question is: how can we decrease RAM consumption, at least for idle workers? Perhaps some Celery or Python options may help? How can we determine which part takes most of the memory
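The knobs usually reached for first with a large, mostly-idle prefork pool, sketched as a config mapping. The setting names follow the modern Celery docs (the celeryd-era spellings were `CELERYD_MAX_TASKS_PER_CHILD` etc.); the values are illustrative.

```python
# Sketch: settings that commonly shrink prefork worker memory use.
# Values are placeholders, not tuned for this workload.
memory_settings = {
    "worker_max_tasks_per_child": 100,  # recycle children, reclaiming leaked memory
    "worker_prefetch_multiplier": 1,    # don't buffer extra messages per process
}
# Fewer, busier processes also cut the idle footprint, e.g.:
#   celery -A proj worker --concurrency=50 --max-tasks-per-child=100
print(memory_settings["worker_max_tasks_per_child"])
```

For finding which part takes the memory, per-process sampling with tools like `smem` or Python-level heap profilers is the usual starting point.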