Celery worker hangs without any error

梦如初夏 2021-01-31 06:06

I have a production setup running Celery workers that make POST/GET requests to a remote service and store the result. It handles a load of around 20k tasks per 15 minutes.

3 Answers
  • 2021-01-31 06:52

I also faced this issue when using delay() with @shared_task alongside celery, kombu, amqp, and billiard. After calling the API everything worked fine, but as soon as the call reached delay() it hung.

So, the issue was that the settings below were missing from the main application's __init__.py. They make sure the app is always imported when Django starts, so that shared_task will use this app.

In __init__.py:

    from __future__ import absolute_import, unicode_literals

    # This will make sure the app is always imported when
    # Django starts so that shared_task will use this app.
    from .celery import app as celeryApp

    __all__ = ['celeryApp']
        
    

Note 1: In place of celeryApp, put your application name, i.e. the app defined in celery.py; import that app and reference it here.
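For reference, the celery.py that __init__.py imports from typically follows the standard Celery/Django layout. This is a sketch, with "myproject" as a placeholder for your actual project package:

```python
# celery.py -- lives next to settings.py; "myproject" is a placeholder.
import os
from celery import Celery

# Point Celery at the Django settings before the app is created.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery("myproject")

# Read all CELERY_* settings from django.conf.settings.
app.config_from_object("django.conf:settings", namespace="CELERY")

# Discover tasks.py modules in every installed Django app.
app.autodiscover_tasks()
```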

Note 2: If you are only seeing the hang in shared tasks, the solution above may fix your issue and you can ignore the points below.

I also want to mention another issue: if anyone is facing an Error 111 (connection refused) issue, check whether your versions of amqp==2.2.2, billiard==3.5.0.3, celery==4.1.0, and kombu==4.1.0 are mutually compatible. The versions mentioned are just an example. Also check whether Redis is installed on your system (if you are using Redis).

Also make sure you are using Kombu 4.1.0: later versions of Kombu rename async to asynchronous.
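A quick way to see which of these versions you actually have installed (so mismatches like the Kombu async rename are visible) is to query package metadata; a small sketch using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

# The four packages the answer above mentions as version-sensitive.
packages = ("celery", "kombu", "amqp", "billiard")
installed = {}
for pkg in packages:
    try:
        installed[pkg] = version(pkg)
    except PackageNotFoundError:
        installed[pkg] = None  # not installed in this environment

for pkg, ver in installed.items():
    print(pkg, ver or "not installed")
```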

  • 2021-01-31 07:03

    Follow this tutorial

    Celery Django Link

Add the following to your settings.

NB: Install Redis for both the transport and the result backend.

       # TRANSPORT
       CELERY_BROKER_TRANSPORT = 'redis'
       CELERY_BROKER_HOST = 'localhost'
       CELERY_BROKER_PORT = '6379'
       CELERY_BROKER_VHOST = '0'
    
       # RESULT
       CELERY_RESULT_BACKEND = 'redis'
       CELERY_REDIS_HOST = 'localhost'
       CELERY_REDIS_PORT = '6379'
       CELERY_REDIS_DB = '1'
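With the broker and result backend configured, a minimal round trip looks like this. This is only a sketch: it assumes a hypothetical tasks.py module picked up by task autodiscovery and a Redis server running on localhost, so the dispatch calls are shown as comments:

```python
# tasks.py -- hypothetical task module; requires a running Redis broker.
from celery import shared_task

@shared_task
def ping(url):
    # Placeholder for the real POST/GET work against the remote service.
    return f"pinged {url}"

# Caller side: .delay() enqueues the task instead of running it inline,
# and result.get() blocks until a worker stores the outcome in Redis.
#   result = ping.delay("https://example.com")
#   print(result.get(timeout=10))
```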
    
  • 2021-01-31 07:11

If your celery worker gets stuck sometimes, you can use strace and lsof to find out which system call it is stuck on.

    For example:

    $ strace -p 10268 -s 10000
    Process 10268 attached - interrupt to quit
    recvfrom(5,
    

10268 is the PID of the celery worker; recvfrom(5 means the worker is stopped while receiving data from file descriptor 5.

Then you can use lsof to check what file descriptor 5 is in this worker process.

    lsof -p 10268
    COMMAND   PID USER   FD   TYPE    DEVICE SIZE/OFF      NODE NAME
    ......
    celery  10268 root    5u  IPv4 828871825      0t0       TCP 172.16.201.40:36162->10.13.244.205:wap-wsp (ESTABLISHED)
    ......
    

This indicates that the worker is stuck on a TCP connection (you can see 5u in the FD column).

Some Python packages like requests block while waiting for data from the peer, and this can cause a celery worker to hang. If you are using requests, make sure to set the timeout argument.
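The effect of a missing timeout can be reproduced with a plain socket, no Celery or requests involved; a minimal sketch using only the standard library, where a peer accepts the connection but never sends a byte (with requests the equivalent fix is passing timeout= to the call):

```python
import socket

# A listener that completes the TCP handshake but never sends data,
# simulating the stalled remote service that hangs a worker.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

# Without timeout=, the recv() below would block indefinitely -- exactly
# the hang that strace shows as a worker parked in recvfrom().
client = socket.create_connection(server.getsockname(), timeout=1.0)
try:
    client.recv(1024)
    timed_out = False
except socket.timeout:
    timed_out = True  # the silent hang becomes a catchable error

print(timed_out)
client.close()
server.close()
```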


    Have you seen this page:

    https://www.caktusgroup.com/blog/2013/10/30/using-strace-debug-stuck-celery-tasks/
