apscheduler

RuntimeError: There is no current event loop in thread, with async + APScheduler

Submitted by 戏子无情 on 2019-12-03 03:42:15
Question: I have an async function and need to run it with APScheduler every N minutes. There is the Python code below:

    URL_LIST = ['<url1>',
                '<url2>',
                '<url2>',
                ]

    def demo_async(urls):
        """Fetch list of web pages asynchronously."""
        loop = asyncio.get_event_loop()  # event loop
        future = asyncio.ensure_future(fetch_all(urls))  # tasks to do
        loop.run_until_complete(future)  # loop until done

    async def fetch_all(urls):
        tasks = []  # dictionary of start times for each url
        async with ClientSession() as session:
            for

APScheduler (key points)

Submitted by Anonymous (unverified) on 2019-12-02 23:56:01
Requirement: there are two systems, MySQL and Redis. An insert may succeed in MySQL but fail in Redis, so the data in the two systems can drift apart. The MySQL and Redis data therefore need to be synchronized periodically.

Solution: run a scheduled task once a day that synchronizes the MySQL data with the Redis data.

crontab is a command built into Linux. It depends on the Linux system and offers no dynamic task management (cancelling/pausing/modifying a task's configuration). Use case: ordinary static tasks.

apscheduler is a standalone scheduler program that makes it easy to manage scheduled tasks. Use case: tasks that need to be created and managed dynamically, e.g. an order that is only valid for 30 minutes after it is placed.

Installation:

    pip install apscheduler

Three trigger types are supported:

    date      runs once
    interval  runs periodically; parameter: a time interval
    cron      runs periodically; parameter: points in time

Source: 博客园 Author: 太虚真人 Link: https://www.cnblogs.com/oklizz/p/11431871.html

Integrating APScheduler scheduled tasks into Tornado

Submitted by Anonymous (unverified) on 2019-12-02 22:51:08
Anyone familiar with Python probably knows that APScheduler is an excellent task-scheduling framework for Python, ported from the famous Quartz. I had previously used the Flask flavor of APScheduler for scheduled tasks, and having recently picked up Tornado, I played with the Tornado flavor as well. This post builds a simple web page for adding and removing scheduled jobs; you can extend it, for example by persisting the jobs to a database or changing the cron rules.

    # Add a job (change the value of job_id as needed)
    http://localhost:8888/scheduler?job_id=1&action=add
    # Remove a job (change the value of job_id as needed)
    http://localhost:8888/scheduler?job_id=1&action=remove

The results can be seen in the console.

    from datetime import datetime
    from tornado.ioloop import IOLoop, PeriodicCallback
    from tornado.web import RequestHandler, Application
    from apscheduler.schedulers.tornado import TornadoScheduler

    scheduler = None

How to run recurring task in the Python Flask framework?

Submitted by 独自空忆成欢 on 2019-12-02 18:34:13
I'm building a website which provides some information to the visitors. This information is aggregated in the background by polling a couple of external APIs every 5 seconds. The way I have it working now is with APScheduler jobs. I initially preferred APScheduler because it makes the whole system easier to port (since I don't need to set up cron jobs on the new machine). I start the polling functions as follows:

    from apscheduler.scheduler import Scheduler

    @app.before_first_request
    def initialize():
        apsched = Scheduler()
        apsched.start()
        apsched.add_interval_job(checkFirstAPI, seconds=5)

WARNING:apscheduler.scheduler:Execution of job skipped: maximum number of running instances reached (1)

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-01 13:44:26
Question: In my code I run a cron job every five seconds, and I've been getting the same WARNING ever since. This is the API call that I used:

    sched.add_cron_job(test_3, second="*/5")

And I get a warning:

    WARNING:apscheduler.scheduler:Execution of job "test_3 (trigger: cron[second='*/5'], next run at: 2013-11-28 15:56:30)" skipped: maximum number of running instances reached (1)

I tried giving a time gap of 2 minutes and it doesn't solve the issue. Help me in overcoming this issue.

Answer 1: I

Scheduled jobs in Python

Submitted by 陌路散爱 on 2019-12-01 10:09:44
APScheduler is a Python scheduled-task framework based on Quartz. It implements all of Quartz's features and is very convenient to use. It provides date-based, fixed-interval, and crontab-style jobs, and jobs can be persisted.

APScheduler offers several different schedulers so that developers can choose what fits their actual needs. It also provides different storage mechanisms that can cooperate with third-party persistence back ends such as Redis or databases. All in all it is powerful and easy to use.

Installation: installing APScheduler with the pip package manager is the quickest and most convenient way.

The main scheduling classes in APScheduler. There are several important concepts to understand:

Trigger: contains the scheduling logic. Every job has its own trigger, which decides when the job will run next according to the time points, frequency, and time spans configured on it. Beyond their initial configuration, triggers are completely stateless.

Job store: stores the scheduled jobs. The default job store simply keeps jobs in memory; the other job stores persist them to a database. A job's data is serialized when it is saved to a persistent job store and deserialized when it is loaded back. Schedulers must not share the same job store. Job stores support the mainstream storage mechanisms: Redis, MongoDB, relational databases, memory, and so on.

Executor: handles the running of jobs

How do I schedule an interval job with APScheduler?

Submitted by 蓝咒 on 2019-12-01 04:06:25
Question: I'm trying to schedule an interval job with APScheduler (v3.0.0). I've tried:

    from apscheduler.schedulers.blocking import BlockingScheduler

    sched = BlockingScheduler()

    def my_interval_job():
        print 'Hello World!'

    sched.add_job(my_interval_job, 'interval', seconds=5)
    sched.start()

and

    from apscheduler.schedulers.blocking import BlockingScheduler

    sched = BlockingScheduler()

    @sched.scheduled_job('interval', id='my_job_id', seconds=5)
    def my_interval_job():
        print 'Hello World!'

    sched.start()

Implementing scheduled tasks in Python

Submitted by  ̄綄美尐妖づ on 2019-11-29 19:45:53
Scheduled task handling can be implemented in Python with the apscheduler module, which needs to be installed first: pip install apscheduler. The example from my own work processes expired users in the database automatically every day. The BackgroundScheduler scheduler is especially handy because it is non-blocking; with Django you can use it directly inside an app. apscheduler is powerful, with four major components; the examples below only scratch the surface of what I have personally used.

Example 1: compute 1 + 1 = 2 every Monday through Friday.

<1> Import the module:

    from apscheduler.schedulers.background import BackgroundScheduler

<2> Create the function to run:

    def job():
        a = 1 + 1
        print(a)

<3> Create the scheduler object:

    scheduler = BackgroundScheduler()

<4> Add the job and set its schedule. Run at 6:30 every Monday through Friday:

    scheduler.add_job(job, 'cron', day_of_week='mon-fri', hour=6, minute=30)
    # job is the task, 'cron' is the trigger that sets the firing times, and
    # the remaining arguments are the schedule; the parameter details follow below

Run at 6:30 every day:

    scheduler.add_job(job, 'cron', hour=6, minute=30)

<5> Start it:

    scheduler.start()

    """Process expired users in the database every day

No trigger by the name “interval” was found

Submitted by 好久不见. on 2019-11-29 15:40:11
I've been working with APScheduler, and when attempting to run the code I get the error "No trigger by the name 'interval' was found". It works perfectly on my local machine but will not work on my cloud machine. I have tried: reinstalling apscheduler via pip, easy_install, and manually; upgrading setuptools; upgrading all dependencies.

Edit: Code

    if __name__ == '__main__':
        scheduler = BlockingScheduler()
        scheduler.add_job(SMS, 'interval', minutes=1)
        scheduler.start()
        print "Run Complete"
        try:
            # This is here to simulate application activity (which keeps the main thread alive).
            while True:
                time.sleep(2)

Locking a method in Python?

Submitted by 耗尽温柔 on 2019-11-29 11:38:26
Here is my problem: I'm using the APScheduler library to add scheduled jobs in my application. I have multiple jobs executing the same code at the same time, but with different parameters. The problem occurs when these jobs access the same method at the same time, which causes my program to work incorrectly. I want to know if there is a way to lock a method in Python 3.4 so that only one thread may access it at a time. If so, could you please post a simple example? Thanks.

You can use a basic Python locking mechanism:

    from threading import Lock

    lock = Lock()
    ...

    def foo():
        lock.acquire()
        try:
            # only