Apache's Website Benchmarking Tool

Apache calls their basic stress-testing tool the "Apache HTTP server benchmarking tool," but you're likely to end up calling it ab, because that's the name of the command-line tool. Its most obvious feature is that it's entirely free: you can happily generate thousands of requests and hammer a web server. It has several other nice features: the server in question doesn't have to be Apache (I've run it against both Apache and Nginx, and others online are using it against IIS), it's incredibly simple (as opposed to installing a complex framework like Apache JMeter), it comes pre-installed on Macs, and on Linux you can get it without installing the full Apache server.

The entirety of Apache has shipped with Mac OS X for several years now (still true on the current release, "Sierra"), and ab is part of that package, with one important caveat: it's not on the path, so you run it as /usr/sbin/ab. On Debian Jessie, install the apache2-utils package, which puts the tool on your path. Likewise, on Fedora 25, install httpd-tools.
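
For reference, the corresponding commands (ab -V prints the version, a quick way to confirm it's available):

/usr/sbin/ab -V                      # Mac: already present, just not on the path
sudo apt-get install apache2-utils   # Debian Jessie
sudo dnf install httpd-tools         # Fedora 25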

Common usage looks like this:

ab -c 5 -n 100 https://yourserver.example.com/
  • -c is the concurrency level: how many requests to run at once
  • -n is the total number of requests to perform

I almost immediately ran into the error apr_pollset_poll: The timeout specified has expired (70007). The default time-out is 30 seconds, and for reasons I won't go into (and am kind of embarrassed about) we occasionally hit that limit. Adding -s 60 raises the time-out to 60 seconds.
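
So the earlier command with the longer time-out becomes:

ab -s 60 -c 5 -n 100 https://yourserver.example.com/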

Example Run

$ ab -c 10 -n 1000 http://localhost/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking toshi7 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1
Server Hostname:        localhost
Server Port:            80

Document Path:          /
Document Length:        5863 bytes

Concurrency Level:      10
Time taken for tests:   0.120 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      6055000 bytes
HTML transferred:       5863000 bytes
Requests per second:    8308.41 [#/sec] (mean)
Time per request:       1.204 [ms] (mean)
Time per request:       0.120 [ms] (mean, across all concurrent requests)
Transfer rate:          49128.33 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    1   0.3      1       2
Waiting:        0    1   0.3      1       2
Total:          1    1   0.3      1       2

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      2
  95%      2
  98%      2
  99%      2
 100%      2 (longest request)

This was run against my local server. The machine isn't super-fast, but it's static content and a small page. I thought it would be fast, but ... wow. I'm impressed.
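
A note on the two "Time per request" lines above: the second is just the total time divided by the total requests (0.120 s ÷ 1000 ≈ 0.120 ms), and the first multiplies that by the concurrency level (0.120 ms × 10 ≈ 1.204 ms), which is the average latency an individual request actually sees.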

$ ab -c 10 -n 1000 http://volumio.local/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking volumio.local (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:
Server Hostname:        volumio.local
Server Port:            80

Document Path:          /
Document Length:        2538 bytes

Concurrency Level:      10
Time taken for tests:   1.078 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      2850000 bytes
HTML transferred:       2538000 bytes
Requests per second:    927.94 [#/sec] (mean)
Time per request:       10.777 [ms] (mean)
Time per request:       1.078 [ms] (mean, across all concurrent requests)
Transfer rate:          2582.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     4   11   2.0     10      23
Waiting:        3   10   1.9     10      17
Total:          4   11   2.0     10      24

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     11
  75%     12
  80%     12
  90%     14
  95%     15
  98%     16
  99%     17
 100%     24 (longest request)

This is against Volumio (running not on a Raspberry Pi, as that blog post suggests, but on a box with an Intel i5 processor and 16 GB of RAM). Here's an interesting comparison: raise the concurrency while keeping the same number of requests, and it still responds pretty well, although pushing much harder than this would start to slow it down:

$ ab -c 100 -n 1000 http://volumio.local/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking volumio.local (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:
Server Hostname:        volumio.local
Server Port:            80

Document Path:          /
Document Length:        2538 bytes

Concurrency Level:      100
Time taken for tests:   0.970 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      2850000 bytes
HTML transferred:       2538000 bytes
Requests per second:    1031.14 [#/sec] (mean)
Time per request:       96.980 [ms] (mean)
Time per request:       0.970 [ms] (mean, across all concurrent requests)
Transfer rate:          2869.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       3
Processing:    35   94  13.2     95     169
Waiting:       35   92  10.8     95     112
Total:         38   94  13.0     95     169

Percentage of the requests served within a certain time (ms)
  50%     95
  66%     98
  75%     99
  80%     99
  90%    101
  95%    103
  98%    124
  99%    144
 100%    169 (longest request)
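
Comparing the two Volumio runs: ten times the concurrency bought only about 11% more throughput (1031 vs. 928 requests per second), while the median latency grew from 10 ms to 95 ms, almost exactly in proportion to the concurrency. The server is evidently near saturation at -c 10, and the extra connections mostly just wait in line.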

Gripes

It seems to me you should be able to feed it a list of URLs (from a file or the command line, I don't care) for it to cycle through, rather than hitting the same one over and over; a crude shell workaround is sketched below. I also wanted to say "run for 20 minutes," and it turns out the -t flag does roughly that (a time limit in seconds, which implies -n 50000 internally); either way, this isn't a big deal, because both approaches produce a good requests-per-second count.
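
As a crude workaround for the URL-list gripe (a sketch only; urls.txt is a hypothetical file with one URL per line), a shell loop works, though it tests the URLs one after another rather than interleaving them:

$ while read url; do ab -c 10 -n 100 "$url"; done < urls.txt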

But this is squawking over a free and, even more importantly, easy-to-use tool for quick testing. It's a good one in a landscape rife with expensive services that will barely tell you more than this does.
