Django Channels: From The Ground Up – Part 2

Last time, we decided to embark on a brave new adventure and give our Django framework a big upgrade with the inclusion of Django Channels. We got just far enough to get the development server running, but while this may be an adequate start, it’s better to develop against something like what we intend to deploy, right?

So, let’s go the rest of the way and get ready to develop against something that at least resembles a standard production-ready environment with Django Channels.

Post-Receive Redux

Since we’re transitioning away from the Django development server, we’ll want to alter our post-receive hook to tie into supervisorctl instead. So, let’s edit the post-receive script like so:

Ubuntu (bash) example:

#!/bin/bash
GIT_WORK_TREE=/home/web/www/ git checkout -f
source /home/web/venv/bin/activate
pushd /home/web/www/
# Install python libs via pip and perform database migrations
pip install --upgrade -r requirements.txt
python manage.py migrate
popd
deactivate
supervisorctl restart server_workers
supervisorctl restart server_interface

FreeBSD (csh) example:

#!/bin/csh
env GIT_WORK_TREE=/home/web/www/ git checkout -f
source /home/web/venv/bin/activate.csh
pushd /home/web/www/
pip install --upgrade -r requirements.txt
python manage.py migrate
popd
deactivate
supervisorctl restart server_workers
supervisorctl restart server_interface

We can also work in steps like running npm and grunt to build and minify static files, or whatever other processes you need to run when pushing new code. This will likely look a little different for every project.
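
For example, if the project has a Node-based asset pipeline, the Ubuntu hook might pick up a couple of extra lines just before the supervisorctl restarts – the task name here is purely a placeholder for whatever your build actually runs:

pushd /home/web/www/
# Install JS dependencies and build/minify static assets (task name is hypothetical)
npm install
grunt build
popd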

 

PostgreSQL Setup

Now, let’s get our main database up and ready for our application. Up to this point, we’ve been relying on SQLite, which is a good starting place, but no match for a world-class hitter like PostgreSQL.

We’ll su in as the standard postgres user, and pass user and database creation commands to psql:

sudo su - postgres <<EOF
echo "CREATE ROLE web LOGIN ENCRYPTED PASSWORD 'your_postgres_pass';" | psql
createdb "your_postgres_db" --owner "web"
echo "GRANT ALL PRIVILEGES ON DATABASE your_postgres_db TO web;" | psql
EOF
sudo service postgresql reload

And we should be good to go.
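
If you’d like a quick sanity check that the role and database line up, try connecting as the new user – this assumes password authentication is allowed on localhost in pg_hba.conf, which is the usual default:

psql -h localhost -U web -d your_postgres_db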

 

Setup Supervisord

Okay, we’re getting closer – now we’re going to get supervisord to not only start our programs when it runs, but also allow our regular web user to start and stop the programs we’re setting up with supervisorctl.

Supervisord is an important part of the puzzle, as it will keep our server interface and workers up and running. If workers fail or die – which is certainly a possibility – our interface will simply sit and hang, waiting for workers to hand it something to do. Conversely, nothing going into or out of the workers is going to get back to your users without the interface, so it’s important we have something monitoring both of them.

We’ll need to open up the supervisord.conf file, or add some new files which will be imported into said conf file – on Ubuntu, for example, we can place our new conf files in /etc/supervisor/conf.d. On FreeBSD, we might just open up the conf file itself – be careful, the install instructions tell us about /etc/supervisord.conf, but there may also be another conf file under /usr/local/etc/ that overrides it.

Either way, we’re going to be adding two new program entries – one for the server workers (which handle the requests in the background) and one for the server interface (which consumes the requests and responses from clients and workers alike).

[program:server_workers]
command=/home/web/venv/bin/python /home/web/www/manage.py runworker
directory=/home/web/www/
user=web
autostart=true
autorestart=true
redirect_stderr=true
stopasgroup=true

[program:server_interface]
command=/home/web/venv/bin/daphne -b 127.0.0.1 -p 8000 yourapp.asgi:channel_layer
directory=/home/web/www/
autostart=true
autorestart=true
stopasgroup=true
user=web

And finally, up near the top of the supervisord.conf file, beneath the [unix_http_server] heading, we need to make a few small changes to allow our web user to start and stop these processes.

[unix_http_server]
file=/var/run/supervisor/supervisor.sock
chmod=0770
chown=nobody:web

Then restart the supervisord process.

You can also opt to add your web user to a group like wheel, and then alter the chown line to something like chown=nobody:wheel. The key here is simply to make the socket available to the appropriate user group so our web user can interact with it.
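
If you do go that route, adding the web user to the group is a one-liner on either platform – the wheel group here is just an example, so use whichever group you put in the chown line:

# Ubuntu
sudo usermod -aG wheel web
# FreeBSD
pw groupmod wheel -m web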

Setup Redis

Getting Redis up and ready is smooth and easy – in fact, at this point you’re already done. See how easy that was?

If you used the requirements.txt in part 1, you already have the needed asgi_redis package installed in your virtual environment, and the defaults that Redis installs with are fine for us. I do encourage you to take a look through the Redis docs, however, and configure it as you need.
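
If you’d like to confirm that Redis is actually up and listening on its default port (6379) before pointing Channels at it, redis-cli makes that a one-liner:

redis-cli ping
# Expected reply: PONG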

Django Setup

If we check our supervisord status with supervisorctl status right now, we can expect it to show a FATAL error for our server_interface program entry. We need to add some new files to get daphne up and running, and while we’re in there we’ll make the changes necessary to get Django playing nicely with Postgres as our database and Redis as our channel backend.

For reference, much of the below can also be found in the Channels docs, with the addition of our postgres details.

Let’s start with PostgreSQL. Open up your settings.py file, and alter the DATABASES dict to the following:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'your_postgres_db',
        'USER': 'web',
        'PASSWORD': 'your_postgres_pass',
        'HOST': 'localhost',
        'PORT': '',  # Empty string == default (5432)
    }
}
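
One thing to double-check: the postgresql_psycopg2 backend needs the psycopg2 driver available in our virtual environment. If it isn’t already in the requirements.txt from part 1, add it there (so the post-receive hook keeps it installed) or install it by hand:

pip install psycopg2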

Now, add the following dict beneath that to set up our Redis channel layer:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        "ROUTING": "yourapp.routing.channel_routing",
    },
}

By default, Channels uses process memory as its communication channel, which is fine for simple development cases, but it obviously can’t be shared between processes and is unsuitable for production.
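
For reference, the in-memory layer described in the Channels docs looks like the sketch below – handy if you want to keep developing locally without a Redis instance, though it only works within a single process (module names assume the same project layout as above):

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "yourapp.routing.channel_routing",
    },
}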

Notice the "ROUTING" component of the above config? That’s a reference to the routing.py file we’re going to slap into the application directory – it should live next to your urls.py file.

Here’s an example:

from channels.routing import route
from channels.routing import include

from yourapp import consumers

http_routing = [
    # A bare http.request route sends every plain HTTP request to this consumer,
    # bypassing the normal urls.py dispatch while it is in place
    route('http.request', consumers.http_consumer),
]

stream_routing = [
]

channel_routing = [
    include(http_routing),
    include(stream_routing),
]

We’ll be fleshing this file out in Part 3, so stay tuned.
In the meantime, you can also add urls as normal to urls.py, referencing standard views in a views.py file or a views package – this all works exactly as it does in standard Django, with no changes needed to accommodate the new ASGI server. Just bear in mind that a catch-all http.request route like the one above will handle plain HTTP requests ahead of your urls.py views for as long as it’s in place.
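
As a quick reminder, a minimal urls.py for this layout might look something like the following – Django 1.x-style url() patterns, with the index view name being just an example:

from django.conf.urls import url

from yourapp import views

urlpatterns = [
    # Standard Django view, untouched by Channels
    url(r'^$', views.index, name='index'),
]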

Next to the views.py file in your application, add a new file named consumers.py. These consumers are the callables that handle the routes we set up in routing.py – they’re what will enable things like websocket requests and replies, rather than being tied into the standard Django view structure.

For now, we’ll put in a simple example of handling an http request, straight from the channels docs:

from django.http import HttpResponse
from channels.handler import AsgiHandler

def http_consumer(message):
    # Make standard HTTP response - access ASGI path attribute directly
    response = HttpResponse("Hello world! You asked for %s" % message.content['path'])
    # Encode that response into message format (ASGI)
    for chunk in AsgiHandler.encode_response(response):
        message.reply_channel.send(chunk)

Finally, we need to create an asgi.py file – this should sit right next to the wsgi.py file that was automatically created for you by Django.

It’s pretty simple:

import os
from channels.asgi import get_channel_layer
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourapp.settings")
channel_layer = get_channel_layer()

This is the file we’re referencing in the supervisord config where we wrote command=/home/web/venv/bin/daphne -b 127.0.0.1 -p 8000 yourapp.asgi:channel_layer, so now that it exists, we have all the pieces in place.
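
If you’d like to sanity-check the entry point before leaning on supervisord, you can run Daphne by hand from the project directory with the very same command – it should start listening without a traceback (Ctrl-C to stop it):

cd /home/web/www/
/home/web/venv/bin/daphne -b 127.0.0.1 -p 8000 yourapp.asgi:channel_layer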

Restart the supervisord service (service supervisor restart on Ubuntu, service supervisord restart on FreeBSD), and in a few seconds you should be able to check status (supervisorctl status) and see both of our program entries up and running!

Which is great! Give yourself a pat on the back. However, we have one step left. Right now, our interface server is bound to port 8000, which is fine for development, but we need to serve up requests on port 80 in production. Now, we could bind to port 80 instead, but that would require running Daphne with root permissions. Instead, let’s borrow the common approach from WSGI servers, and set up a reverse proxy with Nginx.

Last Step – Nginx Setup

Now, it doesn’t have to be Nginx – Apache could handle this duty, as could a number of others.

For our purposes, however, Nginx it is, so let’s take a look at the nicely straightforward setup.

Head to /etc/nginx/sites-available/ and create a new, appropriately named file (e.g. yourapp), and edit it to include the following:

# Enable upgrading of connection (and websocket proxying) depending on the
# presence of the upgrade field in the client request header
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Create an upstream alias to where we've set daphne to bind to
upstream yourapp {
    server 127.0.0.1:8000;
}

server {
    listen 80;

    # If you have a domain name, this is where to add it
    server_name localhost;

    location / {
        # Pass request to the upstream alias
        proxy_pass http://yourapp;
        # Require http version 1.1 to allow for upgrade requests
        proxy_http_version 1.1;

        # We want proxy_buffering off for proxying to websockets.
        proxy_buffering off;

        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if you use HTTPS:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client for the sake of redirects
        proxy_set_header Host $http_host;

        # We've set the Host header, so we don't need Nginx to muddle
        # about with redirects
        proxy_redirect off;

        # Depending on the request value, set the Upgrade and
        # connection headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
FreeBSD note:

Chances are good that your Nginx conf may be in a directory like /usr/local/etc/nginx – you’ll either need to alter it to support imports, or simply edit the file to include the above directives.
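
Two small housekeeping notes, depending on platform – the paths below are the usual defaults, so adjust them to your install. On Debian/Ubuntu the file in sites-available still needs to be linked into sites-enabled, while on FreeBSD you can add an include directive inside the http block of nginx.conf and drop the file into the directory it points at:

# Debian/Ubuntu: enable the site
ln -s /etc/nginx/sites-available/yourapp /etc/nginx/sites-enabled/yourapp

# FreeBSD: inside the http { } block of /usr/local/etc/nginx/nginx.conf
#   include /usr/local/etc/nginx/sites-enabled/*;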

Now, check your config for sanity with nginx -t and, if all is well, restart the nginx service with service nginx restart.

Now, push your changes to the server, and navigate to it in your browser. If all is well, you should see the simple message from the http.request consumer you set up.
What is this feeling? Is this… joy?

Next Time

Well now, we’re all set up with a stack that wouldn’t make a devops engineer cry – not bad! It… doesn’t really do anything yet, though, does it? Next time, we’ll be looking at getting some simple websocket communication up and running.

Not a chat server, though. Everyone does a chat server. Instead, let’s try something a little more ambitious – how about real-time media from an RTSP stream, straight into the browser?

If you’re running into any issues with the above, I suggest that a quick question to the good folks at Stack Overflow will likely avail you best; and as always, if you notice any mistakes in the article, or just have something to say, the comment section awaits.
