Benchmarking with Siege
When a website has been finished and signed off, we get to push it live. For any web development company this is always an exciting moment: it’s the point at which you can show off all the hard work you’ve put in over the previous weeks, months, or years.
One difficulty, however, is that most development/test/staging environments only have a few visitors using the site at any given time. This is completely unrealistic compared with the real world! Yes, your website works fine in a staging environment, but how do you know it will still be running smoothly when you have 100 users on it at the same time? Or 1,000 users, or a million?
To test this, we can use benchmarking. You need to know what you’re looking to test, but as a broad-brush approach, benchmarking will give you an idea of how your web server will perform under stress, either by simulating multiple users or by simulating multiple requests (or both). There are a few different options on the (mostly open-source) market, and over time I’ve used a fair few of them. The "industry standard" used to be ApacheBench, which is part of the apache2-utils package. That package installs a few other utilities as well, most of which are of little use unless you’re running Apache; most of our servers run NGINX with FPM these days. Rather than bloating the server with extra utilities, I settled on a different option: Siege.
I’ve been using Siege almost exclusively for benchmarking for a couple of years now, and the developer still brings out cool new features every few months. It has a feature set which (IMHO) is vastly superior to ApacheBench’s, and it is still very lightweight.
There is a version of Siege in the official APT/YUM/Brew repositories, so installing it is very easy:
Debian/Ubuntu:

sudo apt-get install siege

CentOS/RHEL:

sudo yum install siege

OSX (using Homebrew):

brew install siege
There’s even an npm package if you’re more of a Node person.
The repositories, however, are often quite out of date, and we’re actually going to want some features from the latest beta, so we’ll install from source. This should also install onto any Unix-based system (including OSX). The latest stable release is at http://download.joedog.org/siege/siege-latest.tar.gz but we’ll use 4.0.3rc2.
cd /usr/local
curl -o siege-4.0.3rc2.tar.gz http://download.joedog.org/siege/beta/siege-4.0.3rc2.tar.gz
mkdir /usr/local/siege && tar -zxvf siege-4.0.3rc2.tar.gz -C /usr/local/siege --strip-components=1
rm siege-4.0.3rc2.tar.gz
cd /usr/local/siege
./configure --with-ssl
make
make install
To confirm it’s installed correctly, we can see if the system detects the binary:
$ which siege
/usr/local/bin/siege
Also, we can check the version with:
$ siege -V
SIEGE 4.0.3rc2
Now to start testing
The syntax is very simple (see man siege for the full list of options). I’d recommend running siege -C immediately after install, as this will show you the base defaults. Note also the ‘resource file’ in the output, which is the path to your default configuration file.
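Many of the defaults shown by siege -C can be changed persistently in that resource file rather than on the command line every time. A fragment like the following illustrates the format; the directive names come from the siege.conf shipped with recent versions, but the values here are illustrative, and the available options vary between versions, so check your own resource file:

```
# Selected directives from a Siege resource file (illustrative values)
verbose = true        # print each transaction as it happens
concurrent = 25       # default number of simulated users
delay = 1             # random delay (in seconds) between each user's requests
logging = false       # set to true to append results to the logfile
internet = false      # true = pick URLs from the urls file at random
benchmark = false     # true = no delays between requests at all
timeout = 30          # socket timeout in seconds
```

Anything set here can still be overridden per-run with the equivalent command-line flag.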
To run a test using all the defaults, you can just run:
$ siege https://www.bigbluedoor.net
This will launch a test simulating 25 concurrent users on the site, and will keep going until you cancel it. But we want to see a few more specifics.
For example, we’re initially just going to use two command-line options: -r (number of repetitions) and -c (concurrent users).
Let’s run the following, which simulates a single user requesting our homepage one thousand times:
siege -r1000 -c1 https://www.bigbluedoor.net
At the bottom of the output, you’ll see something similar to the following:
Transactions:                  16000 hits
Availability:                 100.00 %
Elapsed time:                 402.92 secs
Data transferred:              76.07 MB
Response time:                  0.01 secs
Transaction rate:              39.71 trans/sec
Throughput:                     0.19 MB/sec
Concurrency:                    0.37
Successful transactions:       16000
Failed transactions:               0
Longest transaction:            0.18
Shortest transaction:           0.00
Notice that we have 16,000 transactions rather than the 1,000 page requests we asked for: newer versions of Siege parse the returned HTML and fetch the page’s static resources (images, CSS, JavaScript) just as a browser would, so each page view here triggers sixteen requests. Use --no-parse if you want to revert to the basic behaviour and request only the page itself.
Other key metrics here are the response time (0.01 secs): on average our server is returning a response within 0.01 seconds. This is extremely fast, and a testament to the power of Varnish, which caches all our resources for super-fast delivery (I’m also calling this test from a different machine on the same vLAN). You’ll also see that the throughput (0.19 MB/sec) and transaction rate (39.71 trans/sec) are both fairly low. Nothing out of the ordinary here, but it’s an extremely unrealistic test: you’re probably never going to have a single user refreshing the page 1,000 times. This is where concurrency comes in. For each subsequent test, I’ll adjust the number of users and repetitions so we have the same overall number of transactions.
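The sweep of tests that follows keeps reps × users constant at 1,000 page views. As a quick shell sketch, it looks like this; the commands are only echoed here (so nothing is actually hit), but you can paste the printed lines to run them:

```shell
#!/bin/sh
# Print the concurrency sweep used in this post: the same total number
# of page views (1,000) at increasing concurrency levels.
URL="https://www.bigbluedoor.net"

for pair in "1000:1" "100:10" "10:100" "4:250"; do
  reps=${pair%%:*}    # repetitions per user
  users=${pair##*:}   # concurrent users
  cmd="siege -r${reps} -c${users} ${URL}"
  echo "$cmd"
done
```

This prints the four siege invocations in order, from a single patient user up to 250 simultaneous ones.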
$ siege -r100 -c10 https://www.bigbluedoor.net

Transactions:                  16000 hits
Availability:                 100.00 %
Elapsed time:                  58.17 secs
Data transferred:              76.07 MB
Response time:                  0.02 secs
Transaction rate:             275.06 trans/sec
Throughput:                     1.31 MB/sec
Concurrency:                    5.37
Successful transactions:       16000
Failed transactions:               0
Longest transaction:            0.12
Shortest transaction:           0.00
This shows the results from ten users requesting 100 pages each: the transaction rate and throughput are both far higher, which is good: it shows the server can still transfer data quickly. The response time has increased, though, and that is exactly what we’re trying to measure: how quickly does the server start to struggle when you pound it with requests?
$ siege -r10 -c100 https://www.bigbluedoor.net

Transactions:                  16000 hits
Availability:                 100.00 %
Elapsed time:                  14.40 secs
Data transferred:              76.07 MB
Response time:                  0.07 secs
Transaction rate:            1111.11 trans/sec
Throughput:                     5.28 MB/sec
Concurrency:                   73.15
Successful transactions:       16000
Failed transactions:               0
Longest transaction:            0.53
Shortest transaction:           0.01

$ siege -r4 -c250 https://www.bigbluedoor.net

Transactions:                  16000 hits
Availability:                 100.00 %
Elapsed time:                  12.05 secs
Data transferred:              76.07 MB
Response time:                  0.13 secs
Transaction rate:            1327.80 trans/sec
Throughput:                     6.31 MB/sec
Concurrency:                  168.12
Successful transactions:       16000
Failed transactions:               0
Longest transaction:            1.24
Shortest transaction:           0.00
These two tests are a lot more realistic: one hundred users requesting ten pages each, and two hundred and fifty users requesting four pages each. We can see from these tests that the server is easily able to manage this level of requests, which is excellent. At the same time, running a tool such as top on the destination server(s) shows virtually no impact on the load average: the server is not struggling to serve the requests.
But what about logged-in users?
Up until now, however, we’ve just been testing the infrastructure. Varnish caches all of our pages and resources for anonymous users. This is great, as it means that for all of the tests we’ve performed so far, the PHP application (in this case Drupal) has only been accessed once – every other time it’s been a cached version. Whilst this is great for performance and great for our application, this doesn’t necessarily simulate the real world, where users can have accounts and be accessing non-cached pages.
So, we need to know how our application will stand up to that also.
Fortunately, Siege allows us to pass through cookies, which will bypass the Varnish cache for us in the same way it would for an end user. Let’s try the following:
siege -r1 -c1 -g https://www.bigbluedoor.net
Note I’ve added the -g flag here, which shows us the request and response headers. From this command you will see something like the following:
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 05 Nov 2016 16:25:03 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 5970
...etc
In the full output, you may see an X-Varnish header, which shows that Varnish is processing this request. It will contain two numbers if the response was served from cache, or one number if Varnish is passing/piping to the backend. We need to pass a cookie that disables caching: something like the following works with my VCL configuration, though this will depend on which cookies you’re stripping in vcl_recv.
siege -r1 -c1 --header="Cookie: NO_CACHE=1" -g https://www.bigbluedoor.net
With this extra header, I can now see just the one number for X-Varnish, so I know that this page is being served by the PHP application.
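That hit-versus-pass check can be scripted: count the IDs on the X-Varnish header. Two IDs means the response came from cache; one means it was fetched from the backend. The header value below is an illustrative sample, not output from a real request:

```shell
#!/bin/sh
# Classify a response as a Varnish cache hit or a backend pass by
# counting the transaction IDs on the X-Varnish header.
# In practice you'd capture this header from `siege -g` output.
header="X-Varnish: 32770 32768"   # illustrative sample value

# NF is the field count; subtracting the header name leaves the ID count
ids=$(printf '%s\n' "$header" | awk '{print NF - 1}')

if [ "$ids" -eq 2 ]; then
  state="cache hit"
else
  state="backend pass"
fi
echo "$state"
```

With the sample header above (two IDs) this reports a cache hit; add a NO_CACHE-style cookie to the request and you should see it flip to a single ID.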
So, now to stress-test the application, rather than the server/Varnish.
$ siege -r4 -c250 --header="Cookie: NO_CACHE=1" https://www.bigbluedoor.net

Transactions:                  16000 hits
Availability:                 100.00 %
Elapsed time:                  83.18 secs
Data transferred:              76.07 MB
Response time:                  1.14 secs
Transaction rate:             194.00 trans/sec
Throughput:                     0.92 MB/sec
Concurrency:                  227.34
Successful transactions:       16000
Failed transactions:               0
Longest transaction:           25.86
Shortest transaction:           0.00
This is understandably a lot slower: our web application is now serving each page, and our web server is having to provide the images. Crucially though, our Drupal application is still able to serve requests at 1.14 seconds average load time (which is well within expected limits) even when 250 authenticated users are visiting the site simultaneously.
If you want to measure just the application’s HTML responses, without Siege also fetching the static resources, combine this with --no-parse:

siege -r4 -c250 --no-parse --header="Cookie: NO_CACHE=1" https://www.bigbluedoor.net
Siege goes a few stages better than this, even. Rarely would you have multiple users all visiting the homepage, and they wouldn’t on average request a page more than once every few seconds. You can pass in a list of URLs (in a text file) to visit at random, and pass in an “average reading time” for users. Note the -f (file) and -d (delay) flags.
siege -r4 -c250 -d10 --no-parse --header="Cookie: NO_CACHE=1" -f list-of-urls.txt
Top tip: if you have a sitemap.xml file, use this to generate a list of all URLs on your site.
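That sitemap tip can be scripted: pull every <loc> entry out of sitemap.xml into the URL list Siege expects. The sitemap below is an inline sample for illustration; in practice you would fetch your real one with something like `curl -s https://www.bigbluedoor.net/sitemap.xml`:

```shell
#!/bin/sh
# Build a Siege urls file from a sitemap.xml.
# This writes a small sample sitemap to disk so the script is
# self-contained; replace it with a curl of your real sitemap.
cat > sitemap-sample.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.bigbluedoor.net/</loc></url>
  <url><loc>https://www.bigbluedoor.net/about</loc></url>
  <url><loc>https://www.bigbluedoor.net/contact</loc></url>
</urlset>
EOF

# POSIX sed: print whatever sits between <loc> and </loc> on each line
sed -n 's:.*<loc>\(.*\)</loc>.*:\1:p' sitemap-sample.xml > list-of-urls.txt
cat list-of-urls.txt
```

The resulting list-of-urls.txt can be passed straight to siege -f. (This assumes one <loc> per line, which is how most sitemap generators format their output; a proper XML parser would be more robust.)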
This, now, is a pretty realistic test of how the application will actually run in production. If I can run all of these tests with only minimal impact on server performance and on load times for front-end users, I’ll be confident that the application will run as expected under the anticipated level of user interaction.
It’s worth noting, as well, that we’d not normally do this sort of testing on a live environment. Our staging environments are cloned from production, so we can run the same tests without impacting the production website. Our staging servers sit behind Basic authentication (a popup asking for a username and password) to keep search engines out, though there are IP restrictions in place as well. You can also pass these credentials through as a header with Siege:
siege -r1 -c1 --header="Authorization:Basic $(echo -n user:pass | openssl base64)" https://stage.bigbluedoor.net
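If you want to double-check what that command substitution produces before pointing Siege at the server, you can print the header value on its own. The credentials here (user:pass) are illustrative; printf avoids any risk of a stray newline sneaking into the encoded value:

```shell
#!/bin/sh
# Show the Basic auth header value Siege will send: base64 of
# "user:pass" (illustrative credentials, not real ones).
token=$(printf '%s' "user:pass" | openssl base64)
echo "Authorization:Basic ${token}"
# For user:pass this prints: Authorization:Basic dXNlcjpwYXNz
```

You can verify it round-trips with `printf '%s' "$token" | openssl base64 -d`, which should print user:pass back.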
In future blog posts, I’ll talk through how to use Siege to benchmark other metrics, for example to compare how your PHP application runs on different PHP versions, or to test different caching strategies for the best front-end performance. All the tests here produce raw data, which, although useful, is not always sufficient to address a client’s concerns: in a future post I’ll detail how we can use Bombard to graph some of these metrics for easier visual analysis, or feed them onto a dashboard.
We can also use Siege in our automated testing tools to compare page load times before and after a deploy to staging. If, after a deploy, the page load times are vastly increased, then the build process alerts the sysadmin team so we can investigate. This keeps the web server and web application running as smoothly as possible, with minimal risk to the production environment.
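A minimal sketch of that deploy gate might look like the following: parse the average response time out of Siege’s summary and fail if it exceeds a threshold. The threshold and the summary line are illustrative samples here; in a real pipeline the summary would come from running something like `siege -r10 -c25 "$URL" 2>&1` against staging:

```shell
#!/bin/sh
# Hypothetical post-deploy check: alert if Siege's average response
# time exceeds a threshold. Values below are illustrative samples.
THRESHOLD=0.50   # maximum acceptable average response time, in seconds

# Sample summary line; in practice, capture real `siege` output here.
summary="Response time:                  0.13 secs"

# Pull the numeric value out of the "Response time" line
resp=$(printf '%s\n' "$summary" | awk '/Response time/ {print $3}')

# awk handles the floating-point comparison, since plain sh cannot
if awk -v r="$resp" -v t="$THRESHOLD" 'BEGIN { exit !(r <= t) }'; then
  status="OK: ${resp}s is within the ${THRESHOLD}s threshold"
else
  status="ALERT: ${resp}s exceeds the ${THRESHOLD}s threshold"
fi
echo "$status"
```

In a CI job you would exit non-zero on the ALERT branch so the build fails and the sysadmin team gets notified.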
Siege is an incredibly powerful tool, and a key part of our sysadmin arsenal.