How to safely restart Airflow and kill a long-running task?
Question: I have Airflow running in Kubernetes using the CeleryExecutor. Airflow submits and monitors Spark jobs using the DatabricksOperator. My streaming Spark jobs have a very long runtime (they run forever unless they fail or are cancelled). When Airflow worker pods are killed while a streaming job is running, the following happens:

- The associated task becomes a zombie (running state, but no process with a heartbeat)
- The task is marked as failed when Airflow reaps zombies
- The Spark streaming job keeps running
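For context, the setup described above might look roughly like the following DAG sketch. This is an assumption, not the asker's actual code: the question says "DatabricksOperator", which I take to mean `DatabricksSubmitRunOperator` from the Databricks provider, and the DAG id, cluster spec, and notebook path are all hypothetical placeholders.

```python
# Hypothetical sketch of the setup described in the question.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="spark_streaming_job",        # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    # Submits the streaming Spark job to Databricks and polls for its state.
    # Because the job streams indefinitely, this task never reaches a
    # terminal state on its own. If the Celery worker pod running it is
    # killed, the task instance loses its heartbeat, is later reaped as a
    # zombie and marked failed, while the Spark job itself keeps running
    # on Databricks.
    run_streaming_job = DatabricksSubmitRunOperator(
        task_id="run_streaming_job",
        databricks_conn_id="databricks_default",
        json={
            "run_name": "streaming-job",                   # hypothetical
            "new_cluster": {                               # hypothetical spec
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {
                "notebook_path": "/Jobs/streaming",        # hypothetical path
            },
        },
    )
```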