django-celery

Django celery running only two tasks at once?

℡╲_俬逩灬. posted on 2019-12-24 05:32:20
Question: I have a celery task like this:

@celery.task
def file_transfer(password, source12, destination):
    result = subprocess.Popen(['sshpass', '-p', password, 'rsync', '-avz', source12, destination],
                              stderr=subprocess.PIPE, stdout=subprocess.PIPE).communicate()[0]
    return result

I call it from a Django view. The user can select more than one file to copy to the destination, but if the user selects, say, 4 files at once, celery accepts only 2 tasks. What's wrong?

Answer 1: Have you checked the concurrency
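
The truncated answer points at worker concurrency: by default a Celery worker spawns one pool process per CPU core, so a two-core box will only run two of these transfers at a time. A hedged sketch of raising the limit via the old-style django-celery setting (the value 4 is only an example, not from the answer):

# settings.py -- assumes the worker is started with django-celery's manage.py celeryd
# and that more parallel rsync processes are actually wanted
CELERYD_CONCURRENCY = 4  # pool size; defaults to the number of CPU cores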

Is it possible to query the state of a celery task using django-celery-results during the execution of a task?

[亡魂溺海] posted on 2019-12-23 19:59:21
Question: I am using Celery + RabbitMQ for queuing tasks in my Django app. I want to track the state of a task using the task_id and the task_state. For that I created a TaskModel(Model) to store the task_id, task_state and some additional data in the database. On task execution, a new TaskModel object is saved and updated as the task progresses. Everything is working fine. However, I still need to add a lot of functionality, features, error protection etc. That's when I remembered the celery
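
The question is cut short, but the django-celery-results package it refers to persists task state in a TaskResult model that can be queried by task_id, alongside the live state from the result backend. A minimal sketch of reading both (the task_id argument is whatever id the caller stored; enabling the track-started option so STARTED shows up is an assumption about the setup):

from celery.result import AsyncResult
from django_celery_results.models import TaskResult

def task_state(task_id):
    # live state straight from the configured result backend
    state = AsyncResult(task_id).state  # PENDING, STARTED, SUCCESS, FAILURE, ...
    # row persisted by django-celery-results once a result has been stored
    stored = TaskResult.objects.filter(task_id=task_id).first()
    return state, stored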

tracking progress of a celery.group task?

和自甴很熟 posted on 2019-12-23 17:08:44
Question:

@celery.task
def my_task(my_object):
    do_something_to_my_object(my_object)

# in the code somewhere
tasks = celery.group([my_task.s(obj) for obj in MyModel.objects.all()])
group_task = tasks.apply_async()

Does celery have something to detect the progress of a group task? Can I get the count of how many tasks were there and how many have been processed?

Answer 1: Tinkering around in the shell (ipython's tab auto-completion) I found that group_task (which is a celery.result.ResultSet object)
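
The answer breaks off, but celery's ResultSet/GroupResult API does expose the counters the question asks for. A hedged sketch of polling them (the loop and one-second sleep are illustrative, not taken from the original answer):

import time

group_task = tasks.apply_async()
total = len(group_task.results)  # how many subtasks the group contains

while not group_task.ready():
    # completed_count() reports how many subtasks have finished successfully
    print("%d/%d tasks done" % (group_task.completed_count(), total))
    time.sleep(1)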

celery task eta is off, using rabbitmq

眉间皱痕 posted on 2019-12-23 11:44:54
Question: I've got Celery tasks running OK, using the default settings from the tutorials and rabbitmq running on Ubuntu. All is fine when I schedule a task with no delay, but when I give them an eta, they get scheduled in the future as if my clock is off somewhere. Here is some python code that is asking for tasks:

for index, to_address in enumerate(email_addresses):
    # schedule one email every two seconds
    delay = index * 2
    log.info("MessageUsersFormView.process_action() scheduling task," "email to"
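
The classic cause of this symptom is that Celery treats eta values as UTC while the caller builds them from naive local time, so every task lands hours in the future. A hedged sketch of scheduling with timezone-aware datetimes instead (the send_email task name is an assumption; the question's own call is cut off):

from datetime import timedelta
from django.utils import timezone

for index, to_address in enumerate(email_addresses):
    delay = index * 2  # one email every two seconds
    # timezone.now() is UTC-aware, so the eta means the same thing to the worker
    send_email.apply_async(args=[to_address], eta=timezone.now() + timedelta(seconds=delay))
    # countdown=delay is an even simpler way to express the same offset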

Celery workers missing heartbeats and getting substantial drift over EC2

一世执手 posted on 2019-12-23 11:01:39
Question: I am testing my celery implementation over 3 EC2 machines right now. I am pretty confident in my implementation, but I am getting problems with the actual worker execution. My test structure is as follows:

- 1 EC2 machine is designated as the broker and also runs a celery worker
- 1 EC2 machine is designated as the client (it runs the client celery script that enqueues all the tasks using .delay()) and also runs a celery worker
- 1 EC2 machine is purely a worker

All the machines have 1 celery worker
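
Celery's "substantial drift" warning compares the clocks of the workers it hears from, so on EC2 the first thing to rule out is unsynchronized system clocks (NTP on every instance). If the clocks are fine and heartbeats still go missing, the broker heartbeat interval can be relaxed; a hedged sketch of the relevant old-style settings (the 30-second value is only an example):

# settings.py -- assumes all three instances already keep time via NTP
BROKER_HEARTBEAT = 30      # seconds between AMQP heartbeats; 0 disables them
CELERY_ENABLE_UTC = True   # keep every worker on UTC so only real clock skew shows up as drift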

Django-celery: Passing request object to worker

若如初见. posted on 2019-12-23 09:37:51
Question: How can I pass a django request object to a celery worker? When I try to pass the request object it throws an error: Can't Pickle Input Objects. It seems that celery serializes any arguments passed to the worker. I tried other serialization methods like JSON (CELERY_TASK_SERIALIZER = "JSON"), but it is not working. Is it possible to configure celery so that it won't serialize data? Or can I convert the request object to a string before passing it to the worker and then convert it back to an object in the worker?
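
Arguments always have to be serialized to cross the process boundary, and a request object carries sockets and lazy objects that neither pickle nor JSON can represent. The usual workaround is to pass only the plain values the task needs. A minimal sketch (the task name and the particular fields pulled from the request are assumptions, not from the question):

@celery.task
def handle_submission(user_id, form_data, remote_addr):
    # only plain, serializable values arrive here, never the request itself
    ...

def my_view(request):
    handle_submission.delay(
        request.user.id,                  # primary key instead of the User object
        dict(request.POST),               # plain dict instead of the QueryDict
        request.META.get("REMOTE_ADDR"),
    )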

Celery result.get times out

白昼怎懂夜的黑 posted on 2019-12-23 04:42:53
Question: I have two different django projects, say projA and projB. Each has its own celery daemon running on separate queues but the same vhost. projA has a task taskA and projB has a task taskB, and I try to run taskB from inside taskA, e.g.:

@task(routing_key='taskA')
def taskA(event_id):
    # do some work, then call taskB and wait for the result
    result = send_task('taskB', routing_key='taskB')
    res = result.get(timeout=20)

I can see in the logs of projB that taskB finished within a second, but taskA keeps on
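
For taskA to ever see taskB's result, both projects must read and write the same result backend; if projB stores results where projA never looks, result.get() can only time out. A hedged sketch of aligning the two settings files (the URLs and backend choice are assumptions about this deployment):

# settings.py in BOTH projA and projB -- same broker vhost *and* same result backend
BROKER_URL = "amqp://guest:guest@broker-host:5672/myvhost"
CELERY_RESULT_BACKEND = "amqp"   # or a shared database/redis backend both projects can reach

# Note: blocking on result.get() inside a task ties up a worker slot and can deadlock;
# a callback (link=...) or a chain is generally preferred to waiting inside taskA.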

Django Celery and multiple databases (Celery, Django and RabbitMQ)

北城以北 posted on 2019-12-22 11:31:41
Question: Is it possible to set a different database to be used with Django Celery? I have a project with multiple databases in its configuration and don't want Django Celery to use the default one. It would be nice if I could still use the django celery admin pages and read the results stored in this different database :)

Answer 1: It should be possible to set up a separate database for the django-celery models using Django database routers: https://docs.djangoproject.com/en/1.4/topics/db/multi-db/#automatic-database
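
The linked docs are cut off, but a database router along those lines would steer every django-celery model to a dedicated database alias. A minimal sketch (the "celery_db" alias, the "djcelery" app label and the router's dotted path are assumptions about this particular project):

class CeleryRouter:
    """Send django-celery's models to their own database."""

    def db_for_read(self, model, **hints):
        return "celery_db" if model._meta.app_label == "djcelery" else None

    def db_for_write(self, model, **hints):
        return "celery_db" if model._meta.app_label == "djcelery" else None

    def allow_syncdb(self, db, model):
        # Django 1.4-era hook; newer versions use allow_migrate instead
        if model._meta.app_label == "djcelery":
            return db == "celery_db"
        return None

# settings.py
# DATABASE_ROUTERS = ["myproject.routers.CeleryRouter"]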