I'm using Airflow v1.8.1 and run all components (worker, web, flower, scheduler) on Kubernetes & Docker. I use the Celery Executor with Redis, and my tasks look like:
I have been working with the same puckel Docker image. My issue was resolved by:
Replacing
result_backend = db+postgresql://airflow:airflow@postgres/airflow
with
celery_result_backend = db+postgresql://airflow:airflow@postgres/airflow
which I think has been updated in the latest pull by puckel. The change was reverted around February 2018, and your comment was made in January.
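For reference, here is a minimal sketch of what the relevant [celery] section of airflow.cfg might look like on 1.8.x. The broker URL and concurrency value are placeholders for your own setup; the line that matters for this fix is celery_result_backend:

[celery]
# Placeholder broker; any Redis/RabbitMQ URL your workers can reach
broker_url = redis://redis:6379/1
# The key name this answer is about (not result_backend on 1.8.x)
celery_result_backend = db+postgresql://airflow:airflow@postgres/airflow
# Illustrative worker concurrency
celeryd_concurrency = 16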
Please try the airflow scheduler and airflow worker commands.
I think airflow worker is what actually executes each task, while airflow scheduler handles the hand-off between two tasks.
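As a quick sketch, assuming the Airflow CLI is on your PATH (the -D flag simply daemonizes a process and is optional):

# In one shell (or daemonized with -D): schedules tasks and puts them on the queue
airflow scheduler
# In another shell: a Celery worker that actually executes the queued tasks
airflow worker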
We have a solution and want to share it here before 1.9 becomes official. Thanks to Bolke de Bruin for the updates on 1.9. In my situation before 1.9 (we are currently on 1.8.1), the workaround is to have another DAG running that clears any task stuck in the queued state
if it stays there for over 30 minutes.
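This is not our exact DAG, but a minimal sketch of the idea for 1.8.x. It assumes the task_instance table has the queued_dttm column and uses Airflow's internal session (airflow.settings.Session); the DAG id, schedule, and 30-minute threshold are illustrative choices.

from datetime import datetime, timedelta

from airflow import DAG, settings
from airflow.models import TaskInstance
from airflow.operators.python_operator import PythonOperator
from airflow.utils.state import State


def clear_stuck_queued_tasks():
    """Reset task instances that have sat in QUEUED for more than 30 minutes."""
    session = settings.Session()
    cutoff = datetime.utcnow() - timedelta(minutes=30)
    stuck = (
        session.query(TaskInstance)
        .filter(TaskInstance.state == State.QUEUED)
        .filter(TaskInstance.queued_dttm < cutoff)  # assumes queued_dttm is populated
        .all()
    )
    for ti in stuck:
        # Clearing the state lets the scheduler pick the task up again on its next loop.
        ti.state = State.NONE
        session.merge(ti)
    session.commit()
    session.close()


dag = DAG(
    dag_id='clear_stuck_queued_tasks',
    start_date=datetime(2017, 1, 1),
    schedule_interval=timedelta(minutes=30),
    catchup=False,
)

PythonOperator(
    task_id='clear_stuck_queued_tasks',
    python_callable=clear_stuck_queued_tasks,
    dag=dag,
)

Resetting the state to None lets the scheduler re-queue the task; setting it to failed instead would be closer to what the 1.9 patch mentioned below does from the scheduler side.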
Tasks getting stuck is, most likely, a bug. At the moment (<= 1.9.0alpha1) it can happen when a task cannot even start up on the (remote) worker. This happens for example in the case of an overloaded worker or missing dependencies.
This patch should resolve that issue.
It is worth investigating why your tasks do not get a RUNNING state. Setting itself to this state is the first thing a task does. Normally the worker logs before it starts executing, and it also reports any errors. You should be able to find entries for this in the task log.
edit: As was mentioned in the comments on the original question, one example of Airflow not being able to run a task is when it cannot write to required locations. This makes it unable to proceed, and tasks get stuck. The patch fixes this by failing the task from the scheduler.