Airflow 1.9.0 is queuing but not launching tasks


Airflow can be a bit tricky to setup.

  • Do you have the airflow scheduler running?
  • Do you have the airflow webserver running?
  • Have you checked that all DAGs you want to run are set to On in the web UI?
  • Do all the DAGs you want to run have a start date which is in the past? (see the sketch after this list)
  • Do all the DAGs you want to run have a proper schedule which is shown in the web UI?
  • If nothing else works, you can use the web UI to click on the DAG, then on Graph View. Now select the first task and click on Task Instance. In the paragraph Task Instance Details you will see why a DAG is waiting or not running.
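
As a minimal sketch of the start-date and schedule checks above (the DAG id, task, and dates are made up for illustration): the scheduler only creates runs for a DAG whose start_date lies in the past and which has an explicit schedule_interval, for example:

    # Illustrative DAG for the checklist above (names and dates are made up).
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator

    default_args = {
        'owner': 'airflow',
        'start_date': datetime(2018, 1, 1),   # must lie in the past, and be static (not datetime.now())
        'retries': 1,
        'retry_delay': timedelta(minutes=5),
    }

    dag = DAG(
        dag_id='example_checklist_dag',
        default_args=default_args,
        schedule_interval='@daily',           # shows up in the Schedule column of the web UI
    )

    start = DummyOperator(task_id='start', dag=dag)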

For instance, I once had a DAG which was wrongly set to depends_on_past: True, which prevented the current instance from starting correctly.
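
For reference, a hedged sketch of where that flag usually lives (the names here are illustrative, not my original DAG):

    # depends_on_past is typically set via default_args. With True, a task instance
    # will not run until the same task succeeded in the previous DAG run, which can
    # silently hold the whole DAG back if an earlier run failed or never completed.
    from datetime import datetime

    from airflow import DAG

    default_args = {
        'owner': 'airflow',
        'start_date': datetime(2018, 1, 1),
        'depends_on_past': False,   # only set True if the previous run really must gate this one
    }

    dag = DAG('example_dag', default_args=default_args, schedule_interval='@daily')

If you do need depends_on_past: True, clearing or marking success on the stuck earlier task instance in the web UI is typically what lets the later instances proceed.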

Another great resource, directly in the docs, has a few more hints: Why isn't my task getting scheduled?

I'm running a fork of the puckel/docker-airflow repo as well, mostly on Airflow 1.8 for about a year with 10M+ task instances. I think the issue persists in 1.9, but I'm not positive.

For whatever reason, there seems to be a long-standing issue with the Airflow scheduler where performance degrades over time. I've reviewed the scheduler code, but I'm still unclear on what exactly happens differently on a fresh start to kick it back into scheduling normally. One major difference is that scheduled and queued task states are rebuilt.

Scheduler Basics in the Airflow wiki provides a concise reference on how the scheduler works and its various states.

Most people solve the scheduler's diminishing-throughput problem by restarting it regularly. I've personally found success with a 1-hour interval, but have seen intervals as short as every 5-10 minutes used too. Your task volume, task duration, and parallelism settings are worth considering when experimenting with a restart interval.


This used to be addressed by restarting every X runs using the SCHEDULER_RUNS config setting, although that setting was recently removed from the default systemd scripts.

You might also consider posting to the Airflow dev mailing list. I know this has been discussed there a few times and one of the core contributors may be able to provide additional context.


I was facing this issue today and found that bullet point 4 from tobi6's answer above worked and resolved it:

'Do all the DAGs you want to run have a start date which is in the past?'

I am using airflow version v1.10.3

My problem went one step further: in addition to my tasks being queued, I couldn't see any of my Celery workers in the Flower UI. Since I was running my Celery worker as root, the solution was to make changes in my ~/.bashrc file.

The following steps made it work:

  1. Add export C_FORCE_ROOT=true to your ~/.bashrc file
  2. source ~/.bashrc
  3. Run the worker: nohup airflow worker $* >> ~/airflow/logs/worker.logs &

Check your Flower UI at http://{HOST}:5555

One more thing to check is whether the concurrency parameter of your DAG has been reached.

I experienced the same situation when some tasks were shown as NO STATUS.

It turned out that my File_Sensor tasks were run with a timeout of up to 1 week, while the DAG timeout was only 5 hours. Whenever the files were missing, many sensor tasks ended up running at the same time, which overloaded the concurrency limit!

The dependent tasks couldn't start before the sensor task succeeded, so when the DAG timed out they got NO STATUS.

My solution:

  • Carefully set task and DAG timeouts (see the sketch after this list)
  • Increase dag_concurrency in the airflow.cfg file in your AIRFLOW_HOME folder.
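
As a rough illustration of the relationship described above (the operator, path, and numbers are examples, not my original code): keep the sensor timeout inside the DAG run timeout, and bound how many task instances of the DAG may run at once:

    # Illustrative only. The mismatch described above was a sensor timeout (1 week)
    # far larger than the DAG run timeout (5 hours); keeping the sensor timeout
    # inside dagrun_timeout avoids piles of long-running sensors eating concurrency.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.contrib.sensors.file_sensor import FileSensor  # import path may differ by Airflow version

    dag = DAG(
        dag_id='example_sensor_dag',
        start_date=datetime(2019, 1, 1),
        schedule_interval='@daily',
        dagrun_timeout=timedelta(hours=5),  # runs longer than 5 hours are marked failed
        concurrency=16,                     # per-DAG override of the dag_concurrency default in airflow.cfg
    )

    wait_for_file = FileSensor(
        task_id='wait_for_file',
        filepath='/data/incoming/example.csv',  # hypothetical path
        poke_interval=300,
        timeout=4 * 60 * 60,                    # stays inside dagrun_timeout
        dag=dag,
    )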

Please refer to the docs: https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled

I also had a similar issue, but mine was mostly related to SubDagOperator with more than 3000 task instances in total (30 tasks * 44 subdag tasks).

What I found out is that the Airflow scheduler is mainly responsible for putting your scheduled tasks into the "Queued Slots" (pool), while the Airflow Celery workers are the ones that pick up your queued tasks, put them into the "Used Slots" (pool), and run them.
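
To make that concrete, a hypothetical example (the pool name is made up, and the pool itself has to be created beforehand, e.g. under Admin -> Pools in the web UI): a task only declares which pool it wants a slot in; the scheduler queues it against that pool and a Celery worker later turns the queued slot into a used slot while running it.

    # Hypothetical: this task sits in the pool's "Queued Slots" until a Celery
    # worker picks it up, at which point it occupies one of the "Used Slots".
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    dag = DAG('example_pool_dag', start_date=datetime(2019, 1, 1), schedule_interval='@daily')

    extract = BashOperator(
        task_id='extract',
        bash_command='echo "extracting..."',
        pool='etl_pool',   # the pool must already exist, otherwise the task will just stay queued
        dag=dag,
    )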

Based on your description, your scheduler should be working fine. I suggest you check your Celery workers' logs to see whether there are any errors, or restart them to see whether that helps. I have experienced issues where Celery workers go on strike for a few minutes and then start working again (especially with SubDagOperator).
