celerybeat

Can celery celerybeat use a Database Scheduler without Django?

Posted by 我们两清 on 2019-12-05 01:47:32
Question: I have a small infrastructure plan that does not include Django. But, because of my experience with Django, I really like Celery. All I really need is Redis + Celery for my project. Instead of using the local filesystem, I'd like to keep everything in Redis. My current architecture uses Redis for everything until it is ready to dump the results to AWS S3. Admittedly I don't have a great reason for using Redis instead of the filesystem. I've just invested so much into architecting this
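
One way to keep the beat schedule itself in Redis with no Django involved is the third-party celery-redbeat scheduler. Below is a minimal, hedged sketch assuming a local Redis instance; the URLs and the dump_results_to_s3 task are illustrative assumptions, not taken from the question.

    from celery import Celery
    from celery.schedules import crontab

    # Broker, result backend and beat schedule all live in Redis; URLs are illustrative.
    app = Celery("myproject",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    # Use celery-redbeat (pip install celery-redbeat) so beat stores its entries
    # in Redis instead of the default local celerybeat-schedule file.
    app.conf.beat_scheduler = "redbeat.RedBeatScheduler"
    app.conf.redbeat_redis_url = "redis://localhost:6379/2"

    app.conf.beat_schedule = {
        "dump-results-hourly": {
            "task": "myproject.tasks.dump_results_to_s3",  # hypothetical task
            "schedule": crontab(minute=0),
        },
    }

    @app.task(name="myproject.tasks.dump_results_to_s3")
    def dump_results_to_s3():
        print("pushing accumulated results from Redis to S3")

Beat would then be started with something like celery -A myproject beat -l info, and the schedule lives in Redis rather than on the local filesystem.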

Correct setup of django redis celery and celery beats

Posted by ⅰ亾dé卋堺 on 2019-12-05 00:09:41
I have been trying to set up django + celery + redis + celery_beats but it is giving me trouble. The documentation is quite straightforward, but when I run the Django server, Redis, Celery and Celery Beat, nothing gets printed or logged (all my test task does is log something). This is my folder structure:

    - aenima
      - aenima
        - __init__.py
        - celery.py
      - criptoball
        - tasks.py

celery.py looks like this:

    from __future__ import absolute_import, unicode_literals
    import os
    from django.conf import settings
    from celery import Celery

    # set the default Django settings module for the 'celery' program.
    os
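
For reference, the celery.py layout recommended in the Celery "First steps with Django" documentation looks roughly like the sketch below. The project name aenima comes from the question; the settings module path and everything else are assumptions.

    from __future__ import absolute_import, unicode_literals
    import os
    from celery import Celery

    # Point Celery at the Django settings before any task modules are imported.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "aenima.settings")

    app = Celery("aenima")

    # Read all CELERY_* settings (broker URL, beat schedule, ...) from settings.py.
    app.config_from_object("django.conf:settings", namespace="CELERY")

    # Discover tasks.py in every installed app, e.g. criptoball/tasks.py.
    app.autodiscover_tasks()

With that in place, the worker and the scheduler are started separately, e.g. celery -A aenima worker -l info and celery -A aenima beat -l info. If nothing is ever logged, a mismatch between the namespace argument and the setting names (CELERY_BROKER_URL vs BROKER_URL) is a common culprit.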

celerybeat - multiple instances & monitoring

Posted by 孤街醉人 on 2019-12-04 11:17:04
Question: I have an application built using Celery, and recently we got a requirement to run certain tasks on a schedule. I think celerybeat is perfect for this, but I have a few questions: Is it possible to run multiple celerybeat instances without tasks being duplicated? How do I make sure that celerybeat is always up and running? So far I have read https://github.com/celery/celery/issues/251 and https://github.com/ybrs/single-beat, and it looks like only a single instance of celerybeat should be running. I'm
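
The usual answer is to run only one active beat at a time and let a supervisor or a leader-election wrapper keep it alive; tools like single-beat do this with a Redis lock. Purely as an illustration of that idea (not single-beat's actual implementation), here is a hedged sketch with made-up key names:

    import subprocess
    import time
    import redis

    LOCK_KEY = "celerybeat-leader"   # illustrative key name
    LOCK_TTL = 30                    # seconds; refreshed while beat is alive

    r = redis.Redis(host="localhost", port=6379, db=0)

    def run_beat_if_leader():
        # SET with nx=True succeeds for exactly one candidate at a time.
        if not r.set(LOCK_KEY, "this-host", nx=True, ex=LOCK_TTL):
            return  # another host currently owns the schedule
        proc = subprocess.Popen(["celery", "-A", "myproject", "beat", "-l", "info"])
        try:
            while proc.poll() is None:
                r.expire(LOCK_KEY, LOCK_TTL)  # keep the lock while beat runs
                time.sleep(LOCK_TTL // 3)
        finally:
            r.delete(LOCK_KEY)

    while True:
        run_beat_if_leader()
        time.sleep(5)  # non-leaders retry; a crashed leader's lock simply expires

Running a wrapper like this on every host under supervisord or systemd means whichever host grabs the lock runs beat, and if it dies the lock expires and another host takes over.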

Daemonize Celerybeat in Elastic Beanstalk (AWS)

Posted by 空扰寡人 on 2019-12-04 05:19:40
I am trying to run celerybeat as a daemon in Elastic Beanstalk. Here is my config file:

    files:
      "/opt/python/log/django.log":
        mode: "000666"
        owner: ec2-user
        group: ec2-user
        content: |
          # Log file
        encoding: plain
      "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
          # Get django environment variables
          celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
          celeryenv=${celeryenv%?}

          # Create

Replacing Celerybeat with Chronos

Posted by 风格不统一 on 2019-12-04 03:29:26
How mature is Chronos? Is it a viable alternative to a scheduler like celery-beat? Right now our scheduling implements a periodic "heartbeat" task that checks for "outstanding" events and fires them if they are overdue. We are using python-dateutil's rrule for defining this. We are looking at alternatives to this approach, and Chronos seems a very attractive alternative: 1) it would remove the need for a heartbeat task, 2) it supports RESTful submission of events in ISO8601 format, 3) it has a useful management interface, and 4) it scales. The crucial requirement is that
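
For point 2, submitting a recurring job to Chronos is a plain HTTP POST carrying an ISO8601 repeating interval. A hedged sketch of what that could look like with requests; the host, port, endpoint path and job fields are illustrative and should be checked against the Chronos docs for your version:

    import requests

    job = {
        "name": "fire-overdue-events",
        "command": "python /opt/app/fire_events.py",   # hypothetical command
        # ISO8601 repeating interval: repeat forever, every 10 minutes.
        "schedule": "R/2019-12-04T00:00:00Z/PT10M",
        "epsilon": "PT60S",          # how late a run may start and still count
        "owner": "ops@example.com",
    }

    resp = requests.post(
        "http://chronos.example.com:4400/scheduler/iso8601",  # assumed endpoint
        json=job,
        timeout=10,
    )
    resp.raise_for_status()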

Celery Beat: Limit to single task instance at a time

Posted by 南笙酒味 on 2019-12-03 14:16:26
I have celery beat and celery (four workers) doing some processing steps in bulk. One of those tasks is roughly along the lines of "for each X that hasn't had a Y created, create a Y." The task runs periodically at a semi-rapid rate (every 10 seconds) and completes very quickly. There are other tasks going on as well. I've run into the issue multiple times in which the beat tasks apparently become backlogged, so the same task (from different beat ticks) is executed simultaneously, causing incorrectly duplicated work. It also appears that the tasks are executed out of order. Is it possible
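
A common workaround, essentially the "ensuring a task is only executed one at a time" recipe from the Celery docs sketched here with a Redis lock and made-up names (broker URL and key are assumptions), is to make the task refuse to run while another instance holds the lock, so backlogged beat ticks become no-ops:

    import redis
    from celery import Celery

    app = Celery("myproject", broker="redis://localhost:6379/0")  # broker URL assumed
    r = redis.Redis()

    LOCK_KEY = "lock:create_missing_ys"
    LOCK_EXPIRE = 60  # seconds; comfortably longer than one run of the task

    @app.task
    def create_missing_ys():
        # nx=True: at most one concurrent run acquires the lock;
        # ex=LOCK_EXPIRE prevents a crashed worker from holding it forever.
        if not r.set(LOCK_KEY, "1", nx=True, ex=LOCK_EXPIRE):
            return  # another worker is already doing this sweep; skip quietly
        try:
            # "for each X that hasn't had a Y created, create a Y" goes here
            pass
        finally:
            r.delete(LOCK_KEY)

Giving the beat entry an expiry (the expires option on the schedule entry) is a useful complement, since stale backlogged runs are then discarded instead of executed late.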

Work around celerybeat being a single point of failure

Posted by 家住魔仙堡 on 2019-12-03 05:04:21
Question: I'm looking for a recommended solution to work around celerybeat being a single point of failure in a celery/rabbitmq deployment. Searching the web, I haven't found anything that made sense so far. In my case, a once-a-day timed scheduler kicks off a series of jobs that could run for half a day or longer. Since there can only be one celerybeat instance, if something happens to it or to the server it's running on, critical jobs will not be run. I'm hoping there is already a working solution for

How to programmatically generate celerybeat entries with celery and Django

Posted by ≯℡__Kan透↙ on 2019-12-02 18:31:15
I am hoping to be able to programmatically generate celerybeat entries and resync celerybeat when entries are added. The docs state: "By default the entries are taken from the CELERYBEAT_SCHEDULE setting, but custom stores can also be used, like storing the entries in an SQL database." So I am trying to figure out which classes I need to extend to be able to do this. I have been looking at the celery scheduler docs and the djcelery API docs, but the documentation on what some of these methods do is non-existent, so I'm about to dive into the source and was just hoping someone could point me in the right
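
The question is from the djcelery era, but the same idea survives in its successor, django-celery-beat: with its DatabaseScheduler, beat entries are ordinary Django model rows, so they can be created from code and picked up without restarting beat. A hedged sketch, assuming django-celery-beat is installed; the task path and entry name are made up:

    import json
    from django_celery_beat.models import IntervalSchedule, PeriodicTask

    # Reuse or create an "every 10 minutes" schedule row.
    schedule, _ = IntervalSchedule.objects.get_or_create(
        every=10,
        period=IntervalSchedule.MINUTES,
    )

    # Each PeriodicTask row becomes a beat entry once beat runs with
    # --scheduler django_celery_beat.schedulers:DatabaseScheduler.
    PeriodicTask.objects.create(
        interval=schedule,
        name="sync-feed-42",             # must be unique
        task="myapp.tasks.sync_feed",    # hypothetical task path
        kwargs=json.dumps({"feed_id": 42}),
    )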

Celery dies with DBPageNotFoundError

Posted by 孤街醉人 on 2019-12-01 17:04:27
I have 3 machines with celery workers and rabbitmq as a broker; one worker is running with the beat flag, and all of this is managed by supervisor. Sometimes celery dies with the error below. The error appears only on the beat worker, but when it does, the workers on all machines die. (celery==3.1.12, kombu==3.0.20)

    [2014-07-05 08:37:04,297: INFO/MainProcess] Connected to amqp://user:**@192.168.15.106:5672//
    [2014-07-05 08:37:04,311: ERROR/Beat] Process Beat
    Traceback (most recent call last):
      File "/var/projects/env/local/lib/python2.7/site-packages/billiard/process.py", line 292, in _bootstrap
        self.run()