Running a Django worker and Daphne in a Docker container

Backend · Unresolved · 1 answer · 1949 views
悲&欢浪女 2021-02-11 02:33

I have a Django application that runs in a Docker container. Recently I figured out that I'm going to need to add a WebSocket interface to my application. I'm using Channels with Da…

1 answer
  •  北海茫月
    2021-02-11 03:14

    This is an old question, but I figured I will answer it anyway, because I recently faced the same issue and thought I can shed some light on this.

    How Django channels work

    Django Channels is another layer on top of Django and it has two process types:

    • One that accepts HTTP/WebSocket connections
    • One that runs Django views, WebSocket handlers, background tasks, etc.

    Basically, when a request comes in, it first hits the interface server (Daphne), which accepts the HTTP/WebSocket connection and puts it on the Redis queue. A worker (consumer) then sees it, takes it off the queue and runs the view logic (e.g. Django views, WebSocket handlers, etc.).
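For reference, the queue between Daphne and the workers is the channel layer, configured in Django settings. A minimal Redis-backed configuration might look like this (a sketch for the Channels 1.x era this answer targets, using the `asgi_redis` backend; the host name and routing module are assumptions to adjust to your project):

```python
# settings.py (sketch; assumes Channels 1.x with the asgi_redis backend,
# matching the `channel_layer` / `runworker` style used in this answer)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # "redis" is the hostname of the Redis container on the
            # Docker network; adjust to your setup
            "hosts": [("redis", 6379)],
        },
        # Dotted path to your channel routing; the module name here
        # is assumed, only "team_up" comes from the Daphne command below
        "ROUTING": "team_up.routing.channel_routing",
    },
}
```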

    Why it didn't work for you

    Because you only run the worker (consumer), and it blocks the execution of the interface server (producer). That means no connections are ever accepted, and the worker is just staring at an empty Redis queue.

    How I made it work

    I run Daphne, Redis and the workers as separate containers for easy scaling. DB migrations, static file collection, etc. are executed only in the Daphne container. This container will only have one instance running, to ensure that there are no parallel db migrations running.

    Workers on the other hand can be scaled up and down to deal with the incoming traffic.

    How you could make it work

    Split your setup into at least two containers. I wouldn't recommend running everything in one container (using Supervisor, for example). Why? Because when the time comes to scale the setup, there's no easy way to do it. You could scale your container to two instances, but that just creates another Supervisor with Daphne, Redis and Django inside it. If you split the worker from Daphne, you can easily scale the worker container to deal with growing incoming traffic.
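    The split described above can be sketched as a Compose file. This is only an illustration: the service names, images and entrypoint script paths are assumptions based on this answer, not the asker's actual setup.

    ```yaml
    # docker-compose.yml (sketch; names and paths are assumed)
    version: "2"
    services:
      redis:
        image: redis:alpine
      db:
        image: postgres:alpine
      daphne:
        build: .
        command: ./run_daphne.sh   # migrate, collectstatic, then daphne
        ports:
          - "8000:8000"
        depends_on:
          - redis
          - db
      worker:
        build: .
        command: ./run_worker.sh   # wait for db, then runworker
        depends_on:
          - redis
          - db
    ```

    Scaling is then a matter of `docker-compose up --scale worker=4` while keeping a single `daphne` instance.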

    One container could run:

    #!/usr/bin/env bash
    
    python wait_for_postgres.py
    python manage.py migrate
    python manage.py collectstatic --no-input
    
    daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0
    

    while the other one:

    #!/usr/bin/env bash
    
    python wait_for_postgres.py
    python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2
    

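    Both scripts call `wait_for_postgres.py`, which the answer doesn't show. A minimal sketch of such a helper (the host name and the retry parameters are assumptions, not part of the original setup) could be:

    ```python
    # wait_for_postgres.py (sketch): block until the database port accepts
    # TCP connections, so migrate/runworker don't start before Postgres is up.
    import socket
    import time


    def wait_for(check, timeout=30.0, interval=0.5):
        """Call check() until it returns True or timeout seconds pass."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if check():
                return True
            time.sleep(interval)
        return False


    def postgres_is_up(host, port):
        """True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False


    # Typical use in the entrypoint scripts above (host "db" is an
    # assumed Docker service name):
    #   if not wait_for(lambda: postgres_is_up("db", 5432)):
    #       raise SystemExit("Postgres did not become available in time")
    ```

    This only checks that the port is open; a stricter version could attempt a real database connection instead.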
    The 'makemigrations' command

    There is no need to run that command in the script you provided; if anything, it could block the whole thing while waiting for input on some question (e.g. "Did you rename column X to Y?").

    Instead, you can execute it in a running container like this:

    docker exec -it <container_name> python manage.py makemigrations
    
