
Scaling Django

Now that you know how to get Django running on a single server, let’s look at how you can scale out a Django installation. This section walks through how a site might scale from a single server to a large-scale cluster that could serve millions of hits an hour. It’s important to note, however, that nearly every large site is large in different ways, so scaling is anything but a one-size-fits-all operation.

The following coverage should suffice to show the general principle, and whenever possible we’ll try to point out where different choices could be made. First off, we’ll make a pretty big assumption and exclusively talk about scaling under Apache and mod_python. Though we know of a number of successful medium- to large-scale FastCGI deployments, we’re much more familiar with Apache.

Running on a Single Server

Most sites start out running on a single server, with an architecture that looks something like Figure 13-1. However, as traffic increases you’ll quickly run into resource contention between the different pieces of software.

Database servers and Web servers love to have the entire server to themselves, so when run on the same server they often end up fighting over the same resources (RAM, CPU) that they’d prefer to monopolize. This is solved easily by moving the database server to a second machine.

Figure 13-1: A single-server Django setup.

Separating Out the Database Server

As far as Django is concerned, separating out the database server is extremely easy: simply change the DATABASE_HOST setting to the IP address or DNS name of your database server. Use the IP address if at all possible; relying on DNS for the link between your Web server and database server means a DNS hiccup can take your site down. With a separate database server, our architecture now looks like Figure 13-2.
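In the Django version this book covers, that change is a single line in your settings file. A sketch (the IP address and credentials are placeholders, not values from any real deployment):

```python
# settings.py -- point Django at the dedicated database machine.
# The IP address and credentials below are placeholders for illustration.
DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_NAME = 'mysite'
DATABASE_USER = 'mysite'
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = '10.0.0.20'   # IP of the database server, not a DNS name
DATABASE_PORT = ''            # empty string means the database's default port
```

No code changes are needed anywhere else; Django's database layer picks up the new host the next time the server processes restart.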

Here we’re starting to move into what’s usually called n-tier architecture. Don’t be scared by the buzzword – it just refers to the fact that different tiers of the Web stack get separated out onto different physical machines.

At this point, if you anticipate ever needing to grow beyond a single database server, it’s probably a good idea to start thinking about connection pooling and/or database replication. Unfortunately, there’s not nearly enough space to do those topics justice in this book, so you’ll need to consult your database’s documentation and/or community for more information.

Figure 13-2: Moving the database onto a dedicated server.

Running A Separate Media Server

We still have a big problem left over from the single-server setup: the serving of media from the same box that handles dynamic content. Those two activities perform best under different circumstances, and by smashing them together on the same box you end up with neither performing particularly well.

So the next step is to separate out the media – that is, anything not generated by a Django view – onto a dedicated server (see Figure 13-3).

Ideally, this media server should run a stripped-down Web server optimized for static media delivery. Nginx is the preferred choice here, though lighttpd or a heavily stripped-down Apache can also work. For sites heavy in static content (photos, videos, and so forth), moving to a separate media server is doubly important and should likely be the first step in scaling up.
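As a sketch of what "stripped down" means in practice, here is a minimal Nginx server block for a dedicated media host (the hostname and root path are placeholders; a real configuration would sit inside your nginx.conf alongside the usual http-level settings):

```nginx
# Minimal Nginx server block for a dedicated media host.
# server_name and root are placeholders for your own values.
server {
    listen 80;
    server_name media.example.com;
    root /var/media;

    # Static files benefit from long client-side cache lifetimes.
    expires 7d;
    access_log off;
}
```

Note what's absent: no CGI, no scripting modules, no per-request application logic. The server does one thing, which is exactly why it's fast.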

This step can be slightly tricky, however. If your application involves file uploads, Django needs to be able to write uploaded media to the media server. If media lives on another server, you’ll need to arrange a way for that write to happen across the network.

Figure 13-3: Separating out the media server.

Implementing Load Balancing and Redundancy

At this point, we’ve broken things down as much as possible. This three-server setup should handle a very large amount of traffic – we served around 10 million hits a day from an architecture of this sort – so if you grow further, you’ll need to start adding redundancy.

This is a good thing, actually. One glance at Figure 13-3 shows you that if even a single one of your three servers fails, you’ll bring down your entire site. So as you add redundant servers, not only do you increase capacity, but you also increase reliability. For the sake of this example, let’s assume that the Web server hits capacity first.

It’s relatively easy to get multiple copies of a Django site running on different hardware – just copy all the code onto multiple machines, and start Apache on all of them. However, you’ll need another piece of software to distribute traffic over your multiple servers: a load balancer.
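The load balancer's core job is easy to state: spread incoming requests across the back-end pool. A toy sketch of the round-robin strategy most balancers default to (the server addresses are placeholders; a real balancer such as Perlbal also handles health checks, connection management, and removing dead back ends from rotation):

```python
import itertools

# Hypothetical pool of back-end Web servers running the Django site.
BACKENDS = ['10.0.0.11:8000', '10.0.0.12:8000', '10.0.0.13:8000']

_rotation = itertools.cycle(BACKENDS)


def pick_backend():
    """Return the next back end in strict rotation."""
    return next(_rotation)
```

Because each Django process is stateless (session data lives in the database or cache, not in Apache's memory), any back end can serve any request, which is what makes this simple rotation safe.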

You can buy expensive and proprietary hardware load balancers, but there are a few high-quality open source software load balancers out there. Apache’s mod_proxy is one option, but we’ve found Perlbal to be fantastic. It’s a load balancer and reverse proxy written by the same folks who wrote memcached (see Chapter 16).

With the Web servers now clustered, our evolving architecture starts to look more complex, as shown in Figure 13-4.

Figure 13-4: A load-balanced, redundant server setup.

Notice that in the diagram the Web servers are referred to as a cluster to indicate that the number of servers is basically variable. Once you have a load balancer out front, you can easily add and remove back-end Web servers without a second of downtime.

Going Big

At this point, the next few steps are pretty much derivatives of the last one:

  • As you need more database performance, you might want to add replicated database servers. MySQL includes built-in replication; PostgreSQL users should look into Slony and pgpool for replication and connection pooling, respectively.
  • If the single load balancer isn’t enough, you can add more load balancer machines out front and distribute among them using round-robin DNS.
  • If a single media server doesn’t suffice, you can add more media servers and distribute the load with your load-balancing cluster.
  • If you need more cache storage, you can add dedicated cache servers.
  • At any stage, if a cluster isn’t performing well, you can add more servers to the cluster.

After a few of these iterations, a large-scale architecture may look like Figure 13-5.

Figure 13-5: An example large-scale Django setup.

Though we’ve shown only two or three servers at each level, there’s no fundamental limit to how many you can add.

Performance Tuning

If you have a huge amount of money, you can just keep throwing hardware at scaling problems. For the rest of us, though, performance tuning is a must.

Unfortunately, performance tuning is much more of an art than a science, and it is even more difficult to write about than scaling. If you’re serious about deploying a large-scale Django application, you should spend a great deal of time learning how to tune each piece of your stack.

The following sections, though, present a few Django-specific tuning tips we’ve discovered over the years.

There’s No Such Thing as Too Much RAM

Even the really expensive RAM is relatively affordable these days. Buy as much RAM as you can possibly afford, and then buy a little bit more. Faster processors won’t improve performance all that much; most Web servers spend up to 90% of their time waiting on disk I/O. And as soon as you start swapping, performance will simply die. Faster disks might help slightly, but they’re far more expensive than RAM, so the money is better spent on memory.

If you have multiple servers, the first place to put your RAM is in the database server. If you can afford it, get enough RAM to fit your entire database into memory. This shouldn’t be too hard; we’ve developed a site with more than half a million newspaper articles, and it took under 2GB of space.

Next, max out the RAM on your Web server. The ideal situation is one where neither server swaps – ever. If you get to that point, you should be able to withstand most normal traffic.

Turn Off Keep-Alive

Keep-Alive is a feature of HTTP that allows multiple HTTP requests to be served over a single TCP connection, avoiding the TCP setup/teardown overhead. This looks good at first glance, but it can kill the performance of a Django site. If you’re properly serving media from a separate server, each user browsing your site will only request a page from your Django server every ten seconds or so. This leaves HTTP servers waiting around for the next keep-alive request, and an idle HTTP server just consumes RAM that an active one should be using.
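In Apache, turning this off is a one-line change in httpd.conf (directive shown for Apache 2.x):

```apache
# httpd.conf -- don't hold connections open waiting for a second request
# that, on a dynamic-only server behind a media server, rarely comes soon.
KeepAlive Off
```

If the same Apache instance still serves media, leave Keep-Alive on; this advice applies only once static files have moved to their own server.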

Use Memcached

Although Django supports a number of different cache backends, none of them even comes close to being as fast as Memcached. If you have a high-traffic site, don’t even bother with the other backends – go straight to Memcached.
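In the Django version this book covers, selecting the backend is a single setting (the host and port here are placeholders for your memcached daemon; newer Django releases configure this through a CACHES dictionary instead):

```python
# settings.py -- point Django's cache framework at a memcached daemon.
# The host and port are placeholders for your own memcached server.
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
```

As you add dedicated cache servers later, you can list several hosts in this URL, separated by semicolons, and Django will spread keys across them.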

Use Memcached Often

Of course, selecting memcached does you no good if you don’t actually use it. Chapter 16 is your best friend here: learn how to use Django’s cache framework, and use it everywhere possible. Aggressive, pre-emptive caching is usually the only thing that will keep a site up under major traffic.
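The workhorse pattern here is “cache aside”: check the cache first, and only hit the database on a miss. A self-contained sketch of the pattern (a plain dictionary stands in for memcached so the example runs anywhere; with Django you would use django.core.cache.cache, which has the same get()/set() shape, and the function and key names below are ours):

```python
# A dictionary stands in for memcached here; Django's cache object
# offers the same get()/set() interface.
_cache = {}


def cache_get(key):
    return _cache.get(key)


def cache_set(key, value, timeout=300):
    _cache[key] = value  # a real backend would honor the timeout


def get_article(article_id, load_from_db):
    """Cache-aside lookup: serve from cache, fall back to the database."""
    key = 'article:%d' % article_id
    article = cache_get(key)
    if article is None:
        article = load_from_db(article_id)   # expensive query, only on a miss
        cache_set(key, article, timeout=300)
    return article
```

Under heavy traffic, the first request for an article pays the database cost and every subsequent request within the timeout window is served from memory.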

Join the Conversation

Each piece of the Django stack – from Linux to Apache to PostgreSQL or MySQL – has an awesome community behind it. If you really want to get that last 1% out of your servers, join the open source communities behind your software and ask for help. Most free-software community members will be happy to help. Also be sure to join the Django community – an incredibly active, growing group of Django developers. Our community has a huge amount of collective experience to offer.

What’s Next?

The remaining chapters focus on other Django features that you may or may not need, depending on your application. Feel free to read them in any order you choose.