Throughout the lifetime of Kernl I’ve always kept an eye out for ways to reduce costs, which means I keep a constant watch on Digital Ocean’s droplet pricing to see if there are any cost efficiencies to be had. Back in January of 2018 Digital Ocean released what they call “flexible” plans: three droplet types that all cost $15/month but come in three different configurations:
3GB RAM, 1 vCPU
2GB RAM, 2 vCPUs
1GB RAM, 3 vCPUs
I recently re-discovered these plans and thought to myself “Jack, you should load test these configurations and see which is best for WordPress!”. So here we are, attempting to answer the age-old question: “Is RAM or CPU more important for WordPress performance?”
As with the $5 VPS shootout, I used the setup guide posted here to get a consistent setup across the different servers. I ended up with the following software versions:
PHP FPM 7.2
Ubuntu 18.04 LTS
The WordPress server location was in Digital Ocean’s London (lon1) data center and the load generators were located in Digital Ocean’s New York City 3 (nyc3) data center.
No caching was used for this test because I wanted to test WordPress performance, not caching plugin performance.
The test WordPress site was populated with a WP export dump of this blog, which means the test skewed extremely read-heavy. Your results may differ for a write-heavy site, but experience tells me the gap probably isn’t large given how chatty WordPress is with MySQL in most situations.
Test A: 3GB RAM, 1 vCPU
The first test that I ran against the $15 droplet was with the 3GB RAM and 1 vCPU configuration. I had expected this to perform the worst out of any of the configurations due to WordPress generally being CPU bound when it comes to performance.
As you can see things went OK until we hit around 18 requests per second, and then everything went off the rails. This isn’t too surprising as I didn’t have any caching plugins installed. This is just raw WordPress with MySQL (MariaDB). I ended up killing the load test after 7 minutes because it was clear what the results were.
The response time distribution was also pretty terrible with a 99th percentile of slightly over 5 seconds.
Test B: 2GB RAM, 2 vCPUs
Next up was the 2GB of RAM with 2 vCPUs $15 configuration. I had actually thought that this configuration would perform the best due to the nice balance of RAM and CPUs. It performed better than the first test, but it wasn’t the best.
As you can see we had far fewer errors this time and were able to reach 60 requests per second. For those doing the math that is 3.3x better performance than a single vCPU (for the same price!).
The response time distribution was starting to look better with 2 vCPUs as well. The 99th percentile was ~2.7 seconds which isn’t too bad while serving 60 requests per second (5.1 million requests per day).
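(The requests-per-day figures quoted here and throughout are straight unit conversion; a quick sketch:)

```python
# 60 requests/second sustained around the clock:
req_per_sec = 60
per_day = req_per_sec * 60 * 60 * 24   # 5,184,000 -> ~5.1 million requests/day
per_month = per_day * 30               # ~155 million requests/month
print(f"{per_day:,} requests/day, {per_month:,} requests/month")
```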
Test C: 1GB RAM, 3 vCPUs
Finally, I tested the 1GB RAM, 3 vCPUs $15 configuration. Personally I thought this would perform second-best because I figured the server would become RAM-starved while trying to serve more requests. I was wrong.
So for the same price as the 3GB RAM, 1 vCPU server we can get 6.5x the performance by simply choosing the configuration with more CPU availability. We peaked at around 118 requests per second (10.1 million requests per day) and then it started to trail off as we started to see errors.
The response time distribution was pretty excellent here and started to look like what we would see from high performance servers (1 big outlier with generally good response times across the rest of the spectrum). With the 99th percentile at 700ms it seems like a no-brainer to go with more vCPUs.
It seems obvious from the data presented above that you are always better off going with more vCPUs than RAM, but I hesitate to always recommend that because reality is more complicated. There are likely situations where a good balance of RAM and vCPUs would perform better (lots of image uploads and processing for example).
So what does this all mean? In most cases you are probably better off with more vCPUs instead of more RAM. However, I strongly recommend that you do some load testing to make sure that this is true for your WordPress site.
In the world of affordable WordPress hosting there is an array of different VPS providers to choose from. With so many choices, how do you know who to choose? In addition to criteria such as ease of use and support, performance is a huge concern for most people deploying WordPress. In this article we’ll take a look at the performance of several different VPS providers in the $5 tier to see how they perform under load by using Kernl’s WordPress Load Testing feature.
Who are we testing?
There are a lot of VPS providers out there providing machines in the $5 / month tier, so we’ve chosen 7 of the more popular providers to test against:
No Cache – 200 concurrent users for 30 minutes. We used this test to see raw WordPress performance with no caching enabled.
Cached with W3 Total Cache + Memcached – 200 concurrent users for 30 minutes. We used this test to see what a more real-world scenario looks like. In general most people use some form of caching on their site.
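As a side note, before any cached run it’s worth sanity-checking that Memcached is actually reachable from the web server. A minimal sketch, assuming the pymemcache library and a Memcached instance on its default port (neither is part of Kernl’s test harness):

```python
from pymemcache.client.base import Client

# Default Memcached port; adjust the host if the cache lives on another droplet.
cache = Client(("127.0.0.1", 11211))
cache.set("kernl_smoke_test", b"ok", expire=60)
assert cache.get("kernl_smoke_test") == b"ok"
print("Memcached is up and serving reads")
```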
To make sure that our test setup was consistent across all VPS providers we followed this setup guide, where we ended up with the following versions of software:
With regards to regions, for every test we kept the VPS instances on the east coast of the United States, with the exception of Hetzner where we had the VPS instance in Germany.
Results (Requests & Failures)
First, let’s take a look at the request / second results across the different providers.
As you can see there is a wide spread of results depending on the host. Honestly, this wasn’t what I expected when I started. You’ll also notice that the Azure box cost $15/month; it was the closest I could get to finding a $5/month box in their interface (which I felt like I needed another degree in Computer Science to understand!).
So let’s visualize the data with no caching enabled.
We get lots of interesting results here. If you run a site where caching is difficult to do, your $5 will go much further depending on your host. Some notes:
Google Cloud and Azure performed TERRIBLY. I’m not sure why. Maybe it had to do with accessing the disk so frequently to load up PHP files? (Though I’d expect those to be cached by PHP-FPM or some other underlying process.)
If you are in Europe, Hetzner is your friend. $5/month gets you 4x the RAM and 2x the vCPUs of the next closest provider.
If you are in the US, AWS seems to be winning in this test but not by much. It feels like you would be fine going with Digital Ocean or Vultr.
Before making any real decisions on a host, I’d want to run 5 or so tests across different instances to make sure that there isn’t a lot of variance in my results. Noisy neighbors can often be a problem on VPS providers.
Now let’s take a look at a more realistic scenario where you have a caching plugin installed.
You’ll notice that there aren’t any error bars on this graph. That’s because every host handled the load without any errors. This isn’t too surprising since most of the requests would be served right out of memory via Memcached. Some notes on cached requests:
Once again, Google Cloud and Azure perform the worst out of any of these hosts. Given how highly regarded they are in the hosting ecosystem outside of WordPress I expected better performance.
All of the other providers posted impressive numbers (170 req/s – 180 req/s). At that level I would probably choose whichever provider had the best support, user interface, and reliability.
I suspect that most of these boxes could handle a little bit more load before going under. If I increased this test by an order of magnitude (200 users to 2000 users) I think most of the providers would tap out before Hetzner does due to how much more RAM and CPU it has.
Results (Response Time Distribution)
While requests per second and failures per second are valuable metrics, in order to get a more holistic view of raw performance for these $5 VPS instances we need to look at response time distribution.
So how should you interpret this chart? For the percentage columns, each value is in milliseconds. If you look at the 99% column, you can see that 99% of Vultr requests returned in <= 3600ms. If you look at the 80% column for Vultr, you can see that 80% of Vultr requests returned in <= 3400ms.
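As an aside, reproducing this kind of percentile table from raw response-time samples is a one-liner with numpy. A minimal sketch (the lognormal samples are stand-in data, not our measurements):

```python
import numpy as np

# Stand-in response times in milliseconds; substitute your own measurements.
samples = np.random.lognormal(mean=5.5, sigma=0.6, size=10_000)

for p in (50, 80, 90, 95, 99, 100):
    print(f"{p}% of requests returned in <= {np.percentile(samples, p):,.0f} ms")
```

With that aside out of the way, let’s take a look at our un-cached results.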
Some notes on this graph:
Any service that hit the 5000ms mark killed the request. I think that they likely would have gone far beyond that.
Once again, for $5 Hetzner is just crushing it. You simply can’t compete with 4GB RAM and 2 vCPUs. Even with no caching their 99th percentile was under 2 seconds!
For our non-European readers, Digital Ocean, Vultr, and AWS seemed to perform the best. AWS remained remarkably consistent across the response time distribution range. This is a good thing.
Google Cloud… wtf? So 99% of your requests finished in <= 5 seconds, and then 90% of requests finished in <= 300ms? Something is fishy.
And now, let’s see how things change when caching is used.
As expected, most providers do very well when caching is used. Digital Ocean, Linode, Vultr, AWS, and Hetzner all perform in the <= 200ms range (some lower!). It’s hard to pick a winner at this level because geographic latency dominates the differences. The point is that you could choose any of those hosts and be OK when using a cache plugin. Once again, I’m struggling to figure out why someone would spend their $5 on Azure or Google Cloud.
Results (Total Requests)
The final metric we need to look at before passing any judgement on our VPS providers is total requests. This one in particular needs to be compared with response time distribution.
This graph does a great job explaining some of the discrepancies in the response time distribution data. When caching wasn’t enabled, Google Cloud and Azure barely even show up on the graph. More thoughts:
I’m 90% positive that the Google Cloud and Azure instances nearly stopped processing requests at some point. They were so overwhelmed that they just fell over. The data seems to support this.
Google Cloud and Azure are not the best place to spend $5. Even with caching I would be scared if there was ever a cache-miss.
Hetzner is the clear winner on un-cached data. Once again, this makes sense given how much more machine you are getting for your money.
On the U.S. side of the ocean, AWS seems to win here, although not by much over Digital Ocean and Vultr. Once caching is taken into consideration they all perform roughly the same (accounting for latency between data centers and load generators).
If performance is all you care about for your $5 (and that’s a big if), then choose Hetzner if you need a VPS in Europe. If you need a VPS in the US or elsewhere, choose AWS, Digital Ocean, or Vultr. Microsoft and Google are not great choices at $5.
February was a pretty busy month for Kernl! We had a lot of great tweaks to load testing, a few customer feature improvements, and some infrastructure work. Let’s get started!
Features & Bugs
Multi-Region Load Tests – You can now select multiple regions for your WordPress load tests! Instead of having all traffic come from a single region you can have it evenly distributed across all the available regions. This is useful for testing if you have a global audience.
Load Testing Enters General Availability – Kernl’s WordPress load testing is now available for all customers.
Delete Load Tests – You are now able to delete your load tests.
License Max Version Bug – A customer brought to our attention that the “max_version” field behavior wasn’t quite right. This has been resolved.
Customer Card Expiration Cron Job Bug – We recently discovered that the cron job that checks to see if a customer has paid their invoice was broken. This had been going on for about 5 months, so some of you may have received your Kernl subscription for free during that time period. 😉
Multiple License Domains – If you use our license management system and restrict via domains, you can now enter multiple domains on a per-license basis. This is useful if you want to use the same license for local, staging, and production.
License Management UI Updates – We’ve simplified the list view in license management by removing some columns that were cluttering the screen. We’ve also lined up the action buttons better and will now notify you in the plugin/theme detail pages if you have license management enabled but no licenses associated with your product.
The Kernl Analytics server was re-sized to be smaller. It was way over allocated.
Load testing was moved to a Kernl sub-domain. Prior to this it had a top-level domain.
Load testing servers that don’t come up after 3 minutes are removed from the load testing pool.
Session handling (for OAuth) has been moved to cookies. Prior to this we stored sessions in Redis.
We have removed our dependency on the ‘Q’ promise package on the Node.js app servers.
The Digital Ocean 1-click LEMP install is already configured to take full advantage of as many cores and as much RAM as is available. During the course of these tests I never needed to tweak any server settings. That doesn’t mean that more performance couldn’t have been extracted through configuration tweaks, but even with these settings we were able to gain useful information about WordPress performance on the different Digital Ocean droplet levels.
With regards to WordPress configuration… there isn’t any. No caching, CDNs, or compression. Just raw WordPress. It’s easy to get high performance out of WordPress if you cache everything, but seeing what performance looks like in the worst-case scenario is far more interesting.
For this series of tests I imported the contents of my personal blog (Re-cycledair) to the target and then used Kernl to run the load tests. Due to the nature of my blog this test is very read heavy. While this isn’t realistic for everyone, for my personal blog the test is representative of the actual traffic that it receives.
The test itself looked like this (a rough code equivalent follows the list):
Concurrent users – 500
Ramp up – 2 users per second
Duration – 30 minutes
Request Rate – Each user makes 1 request every second
Droplet location – New York City
Load test generator location – Amsterdam
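Kernl handles the load generation for you, but for readers who want to reproduce the same test shape themselves, here’s a rough equivalent sketched with the open-source Locust tool (an illustration under my own assumptions, not Kernl’s actual implementation):

```python
from locust import HttpUser, constant, task

class BlogReader(HttpUser):
    # Matches the test above: each user makes 1 request every second.
    wait_time = constant(1)

    @task
    def front_page(self):
        self.client.get("/")
```

Run it headless with the same concurrency and ramp-up: locust -f blog_reader.py --headless -u 500 -r 2 -t 30m --host https://your-site.example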
As expected the amount of traffic that we could handle scaled linearly with the amount of hardware we were using (and cost).
One interesting thing you’ll see in the graph is that in two places the requests per second actually trend down.
Looking at the data above you can see that I tried Digital Ocean’s standard droplets as well as their CPU optimized droplets. The difference between them is cost and dedicated hyper-threads. From a cost perspective, you’re better off going with the standard droplet in the same price category. Personally I expected the CPU optimized droplet to perform better, but this might not be the best type of workload for it.
What About Caching?
Just for fun I tried out the one-click install of WordPress with LiteSpeed configured on a $5 / month droplet.
Things went really really really well.
That’s right. By the time the test was completed my little $5 / month droplet was receiving 1800 requests / second WITH NO ERRORS. For perspective, that’s 4.6 billion requests per month.
The data does a good job speaking for itself here, but in general you should definitely stick to Digital Ocean’s standard droplets when running WordPress. Even without caching you can get really good performance out of the $40/month droplet.
There are a lot of different reasons to load test.
Infrastructure and Hosting – Kernl WordPress load testing gives you confidence that you are making the right decisions with your infrastructure and hosting. Looking to change hosts but aren’t sure how big or expensive of a plan you need? Run a load test.
Performance Testing – Load testing gives you confidence that the SQL query you just wrote isn’t going to collapse your website under load.
Confidence with your clients – Load testing lets you tell your clients with confidence that their new website can handle 100,000 visitors a day without any degradation in response time.
How Does It Work?
Kernl’s WordPress Load Testing solution makes load testing your WordPress site a breeze. You only need to verify your ownership of the site with an easy-to-use WordPress plugin and then start testing. No coding or infrastructure management needed.
How Much Does it Cost?
Kernl’s WordPress Load Testing is included with your Kernl subscription. Usage per plan is as follows:
Enterprise & Above
If your needs don’t fit neatly into one of these categories feel free to reach out. We’d be happy to create a custom plan to suit your needs.
Welcome to the new year! January was a great month for Kernl with lots of great new features, tweaks, and bug fixes to make your experience even better. Let’s dive in.
Plugin & Theme Tile View – The original Kernl list views for both plugins and themes were tables. Over time these tables became difficult to understand and didn’t convey a lot of information. The new tile view shows more information about your plugins and themes while being a lot friendlier to new users.
Code Widget – On the plugin and theme detail pages there is now a widget at the top that shows you the code you need to integrate Kernl with your product. This is part of a broader plan to make Kernl more friendly to first-time users.
Version Table Actions – The plugin/theme version table had 5 different action buttons on it. This was super overwhelming to people so it has been collapsed into a drop-down menu instead.
http://status.kernl.us – We now have a full-featured status page powered by Pingdom. You can use this site to check the health of our service.
GitLab OAuth – GitLab authentication used to be powered by authentication tokens that you generated on GitLab and then copied into our system. You can now use OAuth with GitLab, which is a much easier flow for customers to manage.
Feature Flag Wizard – Feature flags can be a little daunting if you aren’t already familiar with them. To make it easier for customers to get started, we created a feature flag wizard. If you don’t already have any feature flags I encourage you to check it out.
Load Test Site Verification – You are now required to verify site ownership before running load tests. This is accomplished via a simple WordPress plugin. WordPress load testing is still in closed beta, but please reply to this email if you would like to take part in testing.
Minor Features & Bug Fixes
Unsubscribe links were broken when we whitelisted our domain with SendGrid. This has been resolved.
Our application servers were upgraded to Node.js 10.15.0.
Available RAM was increased from 1GB to 2GB on our application servers.
A loading spinner has been added when plugins and themes are loading.
Load testing response time distribution is now blue instead of grey.
To horizontally scale any app is an exercise in breaking things apart as much as possible. In the case of WordPress there are a few shared components that I wanted to break up:
File System – The file system is the most problematic part of scaling WordPress. Unless you change how WordPress stores plugins, themes, media, and other things you need to have a shared file system that all nodes in your cluster can access. There are likely some other solutions here, but this one provides a lot of flexibility.
MySQL – In many WordPress installs MySQL lives on the same machine as WordPress. For a horizontally scaled cluster this doesn’t work so we need a MySQL that is external.
Memcached – It was brought to my attention that during part 1 of this series using WP Super Cache to generate static pages was sort of cheating. In the spirit of making this harder for myself I introduced W3 Total Cache instead and will be using an external Memcached instance as the shared cache.
Now that the basic why and what is out of the way, let’s talk about the how. I’m a huge fan of Digital Ocean. I use them for everything except file storage, so I’m going to use them for this WordPress cluster as well. Here’s how it’s going down:
Create a droplet that will act as the file system for our cluster. Using NFS all droplets in the cluster will be able to mount it and use it for WordPress. I’m also going to use this for Memcached since NFS doesn’t take up many resources.
Create a base droplet that has Nginx and PHP7.2-FPM installed on it. There is a little bit of boilerplate configuration here, but in general the install is typical. The only change to the Nginx configuration was setting the root directory to the NFS mount. Use this base droplet to configure WordPress database settings.
Use Compose.io to create a MySQL database. I wanted something that was configured well that I didn’t have to think about. Totally worth the $27 / month.
Once the above are done take a snapshot of the base droplet and use it to create more droplets. If all goes well you shouldn’t need to do any configuration.
Using Digital Ocean’s load balancer service add your droplets to the load balancer.
Voila! That’s it.
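Before sending real traffic through the load balancer, I like to hit each droplet directly and confirm it serves the site off the shared NFS mount. A minimal sketch with Python’s requests library (the hostnames are placeholders, not my actual droplets):

```python
import requests

# Placeholder droplet addresses; substitute your own.
APP_SERVERS = ["app-1.example.com", "app-2.example.com", "app-3.example.com"]

for host in APP_SERVERS:
    # Bypass the load balancer and hit each node directly, passing the
    # site's Host header so Nginx routes to the right server block.
    resp = requests.get(f"http://{host}/", headers={"Host": "blog.example.com"}, timeout=5)
    print(host, resp.status_code, len(resp.content), "bytes")
```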
No Cache Smoke Test
200 users, 10 minutes, 2 users/sec ramp up, from London
As with every load test that I do, the first test is always just to shake out any bugs in the load test itself. For this test I didn’t have any caching enabled and only a single app server behind the load balancer. It was effectively the same as the first load test I did during part 1 of this blog series.
As you can see from the graph below, performance was what we would expect from the setup that I used. We settled in to 21 requests / second with no errors.
The response time distribution wasn’t great either. 90% of requests finished in under 5 seconds, but that’s still a very long time. Generally if I saw this response time distribution I would think it’s time to add caching or scale up/out.
So. Many. Failures.
2000 users, 120 minutes, 2 users/sec ramp up, from London
The next test I decided to run was the sustained heavy load test. This is generally where I start to see failures from managed WordPress hosting providers. Given that I didn’t add any more app servers to the load balancer and had no caching things went as poorly as you would expect.
Everything was fine up until ~25 req/s and then the wheels fell off. The response time distribution was bad too. No surprises here.
Looks like it’s time to scale.
2000 users, 120 minutes, 2 users/sec ramp up, from London
Before adding Memcached to the setup I wanted to see how it scaled without it. That means adding more hardware. For this test I added four more application servers (Nginx + PHP) to the load balancer and ran the test again.
As you can see from the request/failure graph we experience roughly linear growth in our maximum requests/second. Given we originally maxed out at ~20 req/s on one machine, maxing out at ~100 req/s with five machines seems like exactly the sort of result that I would expect to see. The response time distribution also started to look better:
Obviously a 90% score of 4 seconds isn’t awesome, but it is a lot better than the previous test. I did make a tiny tweak to the load balancer configuration that may have helped though: I used the ‘least connections’ option instead of ‘round robin’. ‘Least connections’ tells the load balancer to send traffic to the app server with the fewest active connections. This should help avoid dog-piling on a server stuck with a few slower connections.
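To make the difference between the two strategies concrete, here’s a toy illustration of how each one picks the next server (a simplified model, not Digital Ocean’s actual implementation):

```python
from itertools import cycle

# Toy model: current active connection counts per app server.
active = {"app-1": 3, "app-2": 7, "app-3": 1}

def least_connections(pool):
    """Route to the server with the fewest active connections right now."""
    return min(pool, key=pool.get)

rr = cycle(sorted(active))

def round_robin():
    """Route to servers in a fixed rotation, ignoring their current load."""
    return next(rr)

print(least_connections(active))  # -> app-3 (only 1 active connection)
print(round_robin())              # -> app-1, then app-2, then app-3, ...
```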
Given the results above we can safely assume linear growth tied to the number of app servers that we have for quite some time. Meaning for each app server that I add I can expect to handle an additional ~20 req/s. With that in mind, I wanted to see what would happen if I enabled some caching on this cluster.
Gotta Go Fast
In my previous test of vertical scaling I used WP Super Cache to make things go quick. WP Super Cache generates static HTML pages for your site and then serves those. The benefit is that static pages are extremely fast to serve. In this test I wanted to try a more dynamic approach using Memcached and W3 Total Cache. W3 Total Cache takes a very different approach to caching by storing pages, objects, and database queries in Memcached. In general this caching model is more flexible, but possibly a bit slower. I installed Memcached on the same server as the NFS mount because that machine was underutilized. In a real production scenario I wouldn’t violate this separation of concerns.
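The model W3 Total Cache uses here is essentially cache-aside: check Memcached first, and only render through PHP and MySQL on a miss. A simplified sketch of the idea, again assuming pymemcache (the render function is a stand-in for the slow WordPress path):

```python
import time
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def render_with_wordpress(slug):
    # Stand-in for the expensive path: PHP rendering plus MySQL queries.
    time.sleep(0.2)
    return f"<html><body>{slug}</body></html>".encode()

def get_page(slug, ttl=300):
    """Cache-aside: serve from Memcached when possible, else render and store."""
    html = cache.get(slug)
    if html is None:
        html = render_with_wordpress(slug)
        cache.set(slug, html, expire=ttl)
    return html

get_page("hello-world")  # slow: renders and populates the cache
get_page("hello-world")  # fast: served straight from Memcached
```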
Once I enabled W3 Total Cache and re-ran the last test I got some pretty great results.
With W3 Total Cache enabled and 5 app servers we settled in at ~370 requests/second. More impressive is that we only saw 5 failures during the entire test. For perspective, Kernl pushed 1,329,470 requests at the WordPress cluster I created. That’s a failure rate of 0.0004%.
My favorite part of this test was the response time distribution. Without having to wait on MySQL for queries the response times became crazy good.
99% of requests finished in 29ms. And the outlier at 100% was only 2.5 seconds. Not bad for WordPress.
Being the good software developer that I am, I wanted to push this setup to its limits. So I decided to try a test that is an order of magnitude more difficult:
20,000 users, 10 users/sec ramp up, for 60 minutes, from London
Things didn’t go great, but not because of WordPress. I won’t show any graphs of this test because I started to get limited by the network card on the NFS/Memcached machine. Digital Ocean says that I can expect around 30MB/sec out of a given droplet, and with this test I was starting to bump into that limit. If I wanted to test further I would have had to load balance Memcached, which felt a bit outside of scope. In a real production scenario I would likely pay for a hosted Memcached service to deal with this for me.
With Kernl I’m always weighing the build versus buy question when it comes to infrastructure and services. Given how much effort I had to put in to making this setup horizontally scalable and how much effort it would take to make it reproducible and manageable, it hardly seems worth creating and managing my own infrastructure.
Aside from my time, the hardware itself wasn’t cheap either.
App Servers – $25 / month (5 × $5)
Load Balancer – $10 / month
MySQL Database – $27 / month
Memcached (if separate from NFS) – $5 / month
NFS Mount (if separate from Memcached) – $5 / month
At $72 / month I could easily have any of the managed WordPress hosting companies (GoDaddy, SiteGround, WPEngine, etc.) run my setup and handle updates, security, and so on. The only potential hiccup is the traffic limits they place on your account. This setup can handle millions of requests per day, and while their setups can too, they’ll charge you a hefty fee for it.
As with any decision about hardware and scaling the choice varies from person to person and organization to organization. If you have a dedicated Ops team and existing hardware, maybe scaling on your own hardware makes sense. If you’re a WordPress freelancer and don’t want to worry about it, maybe it doesn’t. IMHO I wouldn’t scale WordPress on my own. I’d rather leave it to the professionals.
As I went through the first round of tests I kept thinking: “I wonder how they achieve that level of performance with WordPress?”. This blog post and the post that will follow it are a chronicle of my attempts to scale WordPress to the levels that these managed cloud providers are achieving in an economical fashion.
Having done a handful of load tests against other cloud providers I figured that I should hold myself to the same tests. The scale I’m going to try and achieve is:
200 concurrent users for 10 minutes.
2000 concurrent users for 2 hours.
20,000 concurrent users for 1 hour.
The first test is just to shake out bugs in the load test, but I have seen some providers start to throw errors at that level. The second test is testing for sustained load. And the third test is simulating a heavy traffic spike.
To get things started I created a super basic WordPress install on a $5/month Digital Ocean droplet. The droplet specs:
The first test went really well. I didn’t performance tune anything and didn’t have any sort of cache enabled. After 10 minutes we had settled into 35 requests / second and didn’t see any failures at all.
For 90% of people this is probably more performance than they would ever need. The response time distribution was awesome, too: 100% of requests finished in ~500ms.
And Then The Wheels Fell Off
After my early success with the basic 200 user load test I thought it was time to throw some serious load at my WordPress install. This time I did the 2000 concurrent users for 2 hours test. At this point there still wasn’t any caching plugin installed.
As you can see, things didn’t go well. We peaked at around 40 requests/s, but then our failure rate started to increase in a really bad way. You can also see that we sort of stopped fielding requests after awhile. Looking at the system load information, you can see why things went poorly: the $5 droplet just couldn’t handle any more.
As you would expect in this situation, the response time distribution was pretty dismal. In fact, this is the worst response time distribution that I’ve seen in all the load testing that I’ve performed 🙂
After reaching the max capacity of the $5 droplet with no tuning, it was time to try and scale.
WP Super Cache Me
WP Super Cache is a caching plugin that generates static HTML files of your WordPress site. For read-heavy sites it’s tough to beat in terms of performance. The blog that I’m load testing with definitely falls into this category, so it was the right choice for this test.
This test was simply a repeat of the last test (2000 users, 2 hours, etc) but with caching enabled. The results were pretty great.
With WP Super Cache enabled on the $5 droplet we were able to field around 135 req/s; however, you can see that our error rate was elevated during much of the test. If you expect to see this sort of traffic on a regular basis this isn’t a great outcome, but it’s still pretty respectable for $5/month. The response time distribution tells a different story though:
What’s the point of serving 135 req/s if it takes more than 10s per request for 33% of your users? People are just going to close the tab after 1 second, so we obviously have some more work to do.
Scale Me Up
When scaling any website you have 2 options (and they aren’t mutually exclusive):
Scale up (vertically)
Scale out (horizontally)
Scaling up is usually the easiest thing to do because you’re basically throwing more hardware at the problem. Digital Ocean makes scaling up really easy so I decided to give that a go first. This test was once again just a repeat of the 2000 users for 2 hours test but with better hardware. I upgraded from 1 CPU to 3 CPUs which seemed like the right choice given that it didn’t appear that memory was the problem in my previous tests.
So how did it go? Real good, actually. Once all the load test users were sending requests we settled in at 344 requests / second. If that rate continued all day it would come out to 29 million requests. Not bad for $15/month.
We’re still seeing some failures, but relative to the number of requests it is much lower than the previous test. We can do better but that will likely take some more vertical or horizontal scaling. But what about the response times? Turns out adding more CPUs helped out quite a bit.
100% of our requests finished in under 1.6s. While not SUPER fast it is still a respectable showing for the sort of load that this box was receiving. Even more impressive is that 90% of requests finished in under 100ms and some of that could be attributed to latency. The droplet was spun up in NYC3 and the load test generators were in Toronto, Canada.
The biggest selling point (for me) with WordPress is that it’s easy. With very little configuration or effort I was able to get a WordPress installation serving > 300 req/s. Sure it wasn’t perfect. I am still getting elevated error rates and vertical scaling can only take us so far. But this is likely good enough for almost anyone.
In part 2 of this series I’ll attempt to scale WordPress horizontally by using shared block storage to host the WordPress file system, a dedicated MySQL machine, and a bunch of application servers running behind a load balancer. The goal is to serve 20,000 (or more!) concurrent users for 1 hour without any errors and with response times below 1 second. Follow @kernl_ on Twitter to be notified when part 2 is published!
At the beginning of December Kernl launched a closed beta for our WordPress Load Testing service. As part of the bug shakedown we’ve been spending some time load testing different managed WordPress hosting services. Some of our previous tests include WordPress.com, CloudWays, and GoDaddy. For this test, we turned our sights on ChemiCloud.
How do we judge the platform?
Using Kernl’s load testing feature we run 3 different load tests against the target system.
The Baseline – This is a simple baseline load test that we use to verify that our configuration is correct and that the target can handle even minor traffic. It consists of 200 concurrent users, for 10 minutes, ramping up at 2 users / second, with traffic originating in San Francisco.
Sustained Traffic – The sustained traffic test mimics what traffic might look like for a read-heavy website with a lot of visitors. This load test consists of 2000 concurrent users, for 2 hours, ramping up at 2 users / second, with traffic originating from San Francisco.
Traffic Spike – This test is brutal. We use it to mimic the sort of traffic that your WordPress site might experience if a link to it were shared by a Twitter or Instagram celebrity. The load test consists of 10,000 concurrent users, for 1 hour, ramping up at 10 users / second, with traffic originating from San Francisco.
All traffic for this test is generated out of Digital Ocean’s SFO2 data center.
What ChemiCloud plan was used?
ChemiCloud has several different tiers for managed WordPress hosting. We decided on the “Oxygen” plan. At a high level this seemed to align well with the hosting that we tested thus far.
This load test is intentionally simple. It is read heavy. Many WordPress sites have this sort of traffic profile, but not all do. If you need to perform a WordPress load test with a different traffic profile Kernl supports this. Ideally we should also do multiple tests over time to make sure that this test wasn’t an outlier. Future load test articles will hopefully include this sort of rigor but for now this test can give you reasonable confidence in how you can expect ChemiCloud to perform under a read-heavy load.
As most of the hosting providers that we test do, ChemiCloud performed well on the baseline test. They settled in at right around 25 requests / second.
We did see a few failures towards the end of the test, but it appears that it was only a spike. Once the spike passed we didn’t see any more errors for the duration of the test.
The response time distribution for ChemiCloud was solid for this baseline test. 99% of requests finished in 550ms. If we go further down the distribution you can see that 95% of requests finished in ~250ms which is quite good. Even the 100% outlier still wasn’t that bad.
For the sustained traffic test ChemiCloud did a great job serving requests while keeping response times down. As you can see from the graph below, the test settled in to right around 260 requests / second. The journey to that many users was smooth and there aren’t any surprises on the graph.
There were a few failures during the test period, but it appears that they were only a temporary blip. You can see that about half-way through the test we ran into ~32 failures. After that we didn’t see any more for awhile, and then we had one more before not seeing any again for the rest of the test. For some perspective: we performed 1,861,230 requests against ChemiCloud and only 33 failed. That’s a failure rate of 0.0018%! Nice work, team ChemiCloud.
The response time distribution was pretty great for the sustained test as well. While there was an outlier at 100% (which is common), 99% of requests finished in under 400ms. That’s an effort worthy of praise with WordPress!
The traffic spike load test is brutal for any host. Nobody ever expects to see this kind of traffic out of nowhere so few are prepared for it. ChemiCloud handled the traffic rather well though. We eventually reached 1200 requests / second which is pretty impressive for a plan that costs $17.95 a month. There weren’t any surprises on the way up to that level of traffic, but as you’ll see we did start to see error rates increase.
At about 15 minutes into the load test we started to see an uptick in failure rates. The rate of failure stayed consistent throughout the test after that. This is a fairly common pattern when hosts become overloaded with traffic. In general ChemiCloud performed well even with these failures. We sent 4,332,244 requests to ChemiCloud over an hour period and 134,893 failed. For this sort of load test a failure rate of 3.1% isn’t bad.
The most interesting graph from this load test was the response time distribution. You would expect to see a general degradation of response time performance as request failures increased but that wasn’t the case at all. Everything below the 99th percentile performed remarkably well considering the traffic we threw at it. 98% of requests finished in under 370ms. Great work!
ChemiCloud competes well with the other hosts that we’ve tested. They have a solid price-point and you get a lot of control over your WordPress environment. If you need a host that can handle some solid traffic spikes they are a good choice.
Want to be part of the Kernl WordPress Load Testing Beta? Sign up and then send an email to email@example.com