I suspect the world would be better if that percentage were even greater.
On Serving Web Pages…
I must admit, I’ve been running WordPress for quite some time, and for the most part, I’m pretty happy with it as weblog software goes. But there are a couple of things that bug me about it. For one thing, it isn’t really very simple. In fact, it’s not just that it isn’t simple, it’s that its complexity seems far outpaced by its utility. Allow me to explain: WordPress requires the installation of a bunch of fairly complicated software. It needs Apache (I have heard of it running with other webservers, but in my experience such configurations aren’t well supported), PHP and MySQL. While the default installation is okay for modest blogs like my own, supporting a decent load requires caching plugins, perhaps offloading static content onto a different webserver, and generally a lot of performance tuning.
For instance, here is the output from running “siege”, which simulates 15 simultaneous users trying to access my blog:
Transactions: 54 hits
Availability: 100.00 %
Elapsed time: 10.86 secs
Data transferred: 2.06 MB
Response time: 2.15 secs
Transaction rate: 4.97 trans/sec
Throughput: 0.19 MB/sec
Concurrency: 10.68
Successful transactions: 54
Failed transactions: 0
Longest transaction: 5.36
Shortest transaction: 1.53
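For the curious, the invocation was something along these lines (the exact flags and URL aren’t preserved above, so treat this as an approximation; siege’s -c option sets the number of simulated concurrent users and -t the length of the run):

siege -c 15 -t 10S http://example.org/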
Just under five transactions per second might seem like it would be okay, but it’s nowhere near saturating the link speed. On the other hand, if you are serving mostly static content, you can use a server like thttpd. Here’s the output of a similar siege session simulating 200 simultaneous users:
Transactions: 4249 hits
Availability: 100.00 %
Elapsed time: 10.62 secs
Data transferred: 13.21 MB
Response time: 0.01 secs
Transaction rate: 400.09 trans/sec
Throughput: 1.24 MB/sec
Concurrency: 2.41
Successful transactions: 4249
Failed transactions: 0
Longest transaction: 0.27
Shortest transaction: 0.00
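For what it’s worth, thttpd needs almost nothing in the way of setup; a hypothetical arrangement for a test like this, with the port, document directory and URL as placeholders, looks roughly like:

thttpd -p 8080 -d /var/www/static
siege -c 200 -t 10S http://localhost:8080/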
It’s running flat out: each page actually loads two files, so we get 400 transactions per second. The thttpd server hauls major ass.
Okay, yes. The WordPress blog is running all sorts of dynamic code, but 99% of the time it’s producing precisely the same static content. We aren’t really getting much of a payoff for that huge decrease in available throughput. We could serve literally hundreds of simultaneous users with even modest hardware if we made better use of static content.
It seems to me that there could be a nice weblogging system based upon this rather simple observation.
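One crude way to approximate the idea, without writing any new software, would be to periodically snapshot the dynamic site into a tree of plain files and let something like thttpd serve the result; the paths and URL below are just placeholders:

# Hypothetical: mirror the WordPress output as static files for thttpd to serve
wget --mirror --convert-links --directory-prefix=/var/www/static http://example.org/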
Comment from macalba
Time 12/3/2008 at 3:45 am
It might be interesting to repeat the test after implementing the wp-cache plug-in to convert to static pages?