Load Testing Vultr’s New High Frequency Servers with WordPress

Run your own WordPress Load Tests with Kernl!

Back in June Vultr announced the general availability of their new “High Frequency” servers. Reading through the announcement I was intrigued by their claim of using “3+GHZ processors and blazing fast NVMe storage!” and immediately wondered what WordPress performance would look like versus their regular Cloud Compute offering.

What was tested?

To get a better idea of the performance characteristics of the Vultr High Frequency Compute servers, I ran several types of load tests with all Vultr servers sitting in their Silicon Valley data center:

  • 200 concurrent users from New York City (Digital Ocean NYC3)
  • 2000 concurrent users from New York City (Digital Ocean NYC3) & London (Digital Ocean LON1)

And I tested the following scenarios:

  • $6 High Frequency -vs- $5 Cloud Compute, out of the box
  • $24 High Frequency -vs- $20 Cloud Compute, performance tuned

All servers with the exception of the “performance tuned” ones were built using the pre-built WordPress image that Vultr offers with no caching or performance tuning done. The “performance tuned” servers were created using Nginx, PHP-FPM, MariaDB, Memcached, and W3 Total Cache.
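Kernl runs these tests from managed infrastructure, so no scripting was required, but to give a feel for what an equivalent read-heavy test looks like, here is a minimal sketch using the open-source Locust load testing tool. This is my own illustration, not Kernl's internals; the paths and pacing are placeholders.

```python
from locust import HttpUser, task, between

class WordPressVisitor(HttpUser):
    # Placeholder pacing; the real tests simply held N concurrent users.
    wait_time = between(1, 3)

    @task(5)
    def front_page(self):
        self.client.get("/")

    @task(1)
    def sample_post(self):
        # Hypothetical permalink; substitute real posts from the target site.
        self.client.get("/sample-post/")

# Run with e.g.: locust -H https://target.example.com -u 200 -r 2
```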

Results

Starting with the Vultr High Frequency boxes, let's break down performance.

$6 32GB NVMe

The $6 Vultr High Frequency (HF) machine performed well against the roughly equivalent $5 Vultr Cloud Compute (CC) instance. If we take a look at the requests/failures per second charts, you can see that the HF machine out-performed the CC instance by a wide margin.

$6 32GB NVMe Vultr High Frequency Requests/Failures
$5 25GB SSD Vultr Cloud Compute Requests/Failures

As you can see the HF server was able to handle roughly twice as many requests per second as the CC server without any errors. Let’s check out the average and median response times as well as the response time distribution.

$6 32GB NVMe Vultr High Frequency Response Time
$6 32GB NVMe Vultr High Frequency Response Time Distribution

You can see that as the test progressed response times got steadily worse until leveling out at around 2.5s. The response time distribution shows that 99% of requests finished in 3s or under, with the 100th percentile outlier coming in at just under 6s. This is honestly pretty good performance for no tuning at all.

$5 25GB SSD Vultr Cloud Compute Response Time
$5 25GB SSD Vultr Cloud Compute Response Time Distribution

The response times for the $5 cloud compute instance were about 2 seconds worse than the high frequency instance, with the average coming in at around 4.5 seconds. The response time distribution was worse from a performance perspective but better from a consistency perspective, with the spread between the 50th and 100th percentiles being only ~1.3s.

At a glance, it looks like the Vultr High Frequency server out-performs the cloud compute in a significant way for only $1 extra. However, our next tests show that it might not be so simple.

$24 128GB NVMe (Performance Tuned)

The next set of tests ran 2000 concurrent users against a performance-tuned setup. The traffic originated from clusters of servers in New York and London, with the host server in Vultr's Silicon Valley data center.

First, let's take a look at the requests per second handled by the $24 High Frequency server and the $20 Cloud Compute server.

$24 128GB NVMe Vultr High Frequency Requests

Now, 2000 concurrent users is a lot. I think that the HF machine did quite well overall, but I confess that I did expect a bit more out of it. It topped out at around 1250 requests per second, with errors hovering in the ~175 requests per second range. If left to run for another 30 minutes I think it would have reached closer to 1350 requests per second.

$20 80GB SSD Vultr Cloud Compute Requests

This is where things start to get interesting. The Vultr marketing team bills the high frequency machines as head and shoulders above the regular cloud compute machines. Maybe they are for some workloads, but for this test the difference doesn't seem like much at all. Looking at both graphs you can see that the HF machine topped out a bit higher and had fewer errors, but not enough to warrant the extra cash (in my opinion). Let's see what sort of story the response times tell.

$24 128GB NVMe Vultr High Frequency Response Time
$24 128GB NVMe Vultr High Frequency Response Time Distribution

On the performance tuned box you can see that the requests that did complete successfully were all quite fast. The average response time was between 100ms and 200ms throughout the entire test. The response time distribution was solid as well, with 99% of requests finishing in under 500ms. Not bad for 2000 concurrent users.

Now let’s take a look at how the comparable cloud compute server did.

$20 80GB SSD Vultr Cloud Compute Response Time
$20 80GB SSD Vultr Cloud Compute Response Time Distribution

For the Vultr Cloud Compute instance, response times also hovered between 100ms-200ms, although just a hair higher than the high frequency server's. The response time distribution was excellent as well, with 99% of requests coming in under 500ms. The main difference is a 100th percentile outlier here that the high frequency server didn't have.

Thoughts & Conclusions

From what I can tell, the Vultr High Frequency servers are better than the Cloud Compute servers, but maybe only for certain types of workloads. You can see in the $5/$6 test that they easily out-performed the cloud compute instances, but in the more expensive $20/$24 test the results weren't so cut and dried.

If we look at it from a requests per dollar standpoint, the cloud compute instance is actually a better value for this type of workload.

Requests per Dollar
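The chart above was presumably computed along these lines. The ~1250 req/s peak for the HF box is quoted earlier in this post; the exact Cloud Compute peak isn't quoted (only that it was "a bit" lower), so the 1200 req/s below is a hypothetical stand-in.

```python
def requests_per_dollar(peak_rps: float, monthly_cost: float) -> float:
    """Peak throughput bought per dollar of monthly spend (higher is better)."""
    return peak_rps / monthly_cost

# ~1250 req/s is quoted above for the $24 HF box; 1200 req/s for the
# $20 CC box is an assumed stand-in since the exact peak isn't quoted.
print(f"HF: {requests_per_dollar(1250, 24):.0f} req/s per dollar")  # ~52
print(f"CC: {requests_per_dollar(1200, 20):.0f} req/s per dollar")  # ~60
```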

I suggest running extensive load tests against the Vultr High Frequency machines before making any effort to switch over. They might be better for you. They might perform the same for more money.

In conclusion: ¯\_(ツ)_/¯

Run your own WordPress Load Tests with Kernl!

What’s New With Kernl – June 2019

Hello everyone! Kernl got some pretty interesting updates this month, so let’s dive in!

Features

Average & Median Response Times for Load Tests – Kernl’s WordPress load testing service has always had this data available; we just never surfaced it in a way that was easy to consume. You’ll now see a new tab when you run a load test with information about average and median response times over the course of your test.

Progress Bar for Load Test Initialization – When you create a load test you will now see a progress bar that indicates how far along we are in instantiating your infrastructure. Prior to this update it was easy to think that the process had stalled or broken.

Bug Fixes & Other

  • Load test site ownership verification would fail on certain types of HTML minification. This has been resolved.
  • Our internal analytics service has been refactored into a singleton. This reduced each app server’s memory footprint by 5MB.
  • The public license validation endpoint now has a 10 second cache on it. This helps us absorb burst traffic from license validations (see the sketch after this list).
  • Home page Javascript has been minified and concatenated into a single script to decrease load time.
  • All packages have been updated on our servers.
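We haven't published how that validation cache is implemented, but the idea is simple enough to sketch in Python; `validate_license` below is a hypothetical stand-in for the real check.

```python
import time

def validate_license(license_key: str) -> bool:
    """Stand-in for the real validation (a database lookup in practice)."""
    return license_key.startswith("kernl-")

_cache: dict = {}

def validate_license_cached(license_key: str, ttl: float = 10.0) -> bool:
    """Sketch of the idea: a short-lived cache absorbs bursts of requests."""
    now = time.monotonic()
    entry = _cache.get(license_key)
    if entry is not None and now - entry[0] < ttl:
        return entry[1]                     # served from cache
    result = validate_license(license_key)  # hit at most once per ttl window
    _cache[license_key] = (now, result)
    return result
```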

That’s it for June! If you have any questions reach out to jack@kernl.us.

Digital Ocean 30 Hour WordPress Load Test for Reliability and Consistency

Perform your own 30 hour load tests with Kernl!

Over the past 5 months I’ve been writing a lot of different articles testing WordPress performance under heavy load. One of the comments that I often receive is “Yes, but how reliable is the host over time?”. To answer that question I made some changes to Kernl that allow customers to run long-duration tests against their providers with a steady load. Given my affinity for Digital Ocean, I figured that would be a great first host to test.

What Was Tested & How I Tested It

Digital Ocean has several data centers across the globe and I figured that I should test each of these data centers to see how reliable they were. For this test I ran a single load test against the following data centers for 30 hours with 25 concurrent users:

  • New York City (NYC3)
  • Toronto (TOR1)
  • Bangalore (BLR1)
  • Frankfurt (FRA1)
  • London (LON1)
  • Singapore (SGP1)
  • Amsterdam (AMS3)

All requests were made from the Digital Ocean data center in San Francisco (SFO2). The target of each load test was a simple $5 / month droplet with the WordPress image from the Digital Ocean marketplace installed on it.

Results

The table below summarizes the results for all of the long duration load tests. Click the region to see more details of the load test.

Example Load Test Results Page
Region | Requests | Failures | Failure % | Req/s Avg
NYC3   | 2.49M    | 139      | 0.005%    | 23
TOR1   | 2.46M    | 126      | 0.005%    | 22/23
BLR1   | 2.17M    | 115      | 0.005%    | 20
FRA1   | 2.28M    | 1405     | 0.06%     | 21
LON1   | 2.33M    | 1335     | 0.05%     | 21
SGP1   | 2.29M    | 98       | 0.004%    | 21
AMS3   | 2.30M    | 228      | 0.009%    | 21

As you can see from the results above Digital Ocean’s reliability is excellent across the entire testing period. Even the data centers with the highest error rate (Frankfurt, London) had an incredibly small error rate. I’m not going to add the response time distribution results here because they were uniformly excellent.

Anomalies

  • The Bangalore (BLR1) test averaged only 20 req / s. Even though the geographic distance is far, I expected the response times to go up but the throughput to stay similar.
  • The Toronto (TOR1) load test averaged 22 req / s for 28 hours, then jumped up to 23 req / s for the last two hours of the test. Maybe a noisy neighbor went quiet?
  • Digital Ocean’s Frankfurt (FRA1) and London (LON1) data centers had an order of magnitude more errors than the other data centers I tested.

Conclusions

As a whole, Digital Ocean performed very well in all of their data centers with a moderate amount of sustained traffic to a WordPress instance. In the future I would like to try running all of these tests twice with a different origin for each test run. It’s also worth noting that I haven’t done this type of test on any other platform yet. I hope that as I test more providers I’ll find out whether or not Digital Ocean performed as well as I think it did.

Perform your own 30 hour load tests with Kernl!

What’s New With Kernl – April 2019

Lots of great stuff came down the pipeline this month at Kernl, so let’s dive in!

Features / Bugs / Improvements

  • Share Load Tests – You can now share your load tests publicly! Just click the “share” button that shows up after your load test is completed.
  • Analytics Widget – If you scroll down past the list of plugin/theme versions in the product detail view you’ll now see a widget with analytics data (for Kernl Analytics subscribers).
  • Load Testing Line Graph Performance – For large load testing data sets the line graph performance has been improved by disabling some of the flashier features of Chart.js.
  • Analytics page now attempts to show the analytics for the first product in your product list upon entry.
  • NODE_ENV was not set to “production” on our app servers. It is now.
  • The transition to a fully managed load balancer is now complete.

Blog Posts

That’s it for this month!

Introducing Shareable WordPress Load Test Results

For quite a while now Kernl has had the ability to throw some serious load at your WordPress site, but never a great way to share the results. Today that changes with the introduction of load test result sharing!

🎉

Why would I share my load test results?

The main use case for sharing your load test results is showing your clients that their new WordPress site can handle the traffic they expect. For someone like a YouTube or Twitter celebrity this can be a very real problem. Larger organizations would also enjoy this peace of mind.

How do I get started?

Easy! Just click the share button next to the load test that you want to share. You’ll be presented with a sharing URL that can be copied and pasted into Facebook, Twitter, Slack, or email!

Can I see some examples?

Yes!

As always, feel free to reach out to Jack if you have any questions!

Vultr Cloud Compute -vs- Dedicated -vs- Bare Metal WordPress Performance

Load test your own WordPress site with Kernl! Getting started is free!

In the world of cloud computing there are a lot of different options to choose from. Normally you only need to choose how big your instance will be (2 vCPUs or 4, 2GB RAM or 6), but some cloud compute providers are upping their game and providing an even wider array of options and instance types for you to choose from.

Vultr has 3 different types of compute instances:

  • Cloud Compute – You get your own virtual server, but it is sharing hardware resources with lots of friends. Noisy neighbors can definitely be a problem.
  • Dedicated – Dedicated servers, but virtualized. I think it is still possible to run into noisy neighbor problems in this situation.
  • Bare Metal – Dedicated servers and hardware. No hypervisor and no noisy neighbors taking up your resources.

In this article we’re going to see how a very basic WordPress install performs on the different types of Vultr compute instances. We’ll do so using Kernl’s WordPress Load Testing service.

The Test

As per usual with Kernl load tests I imported this blog’s content into each load testing environment. The load test skews extremely read heavy. If you have a site that is write heavy or a mix you may see different results.

Each test was performed for 1 hour with 2000 concurrent users generating load from London and New York to Vultr’s data center in New Jersey.

Configuration

For this test I used Vultr’s pre-built WordPress image with no caching. A lot of readers might say “But you can get much better performance using X or Y!”, and they would be right! But I’m not testing Apache vs Nginx performance, or W3 Total Cache vs WP Rocket, I’m testing Vultr hardware under load in a real world scenario. I simply want to know at the end of this article if Vultr Cloud Compute, Dedicated, or Bare Metal is better for WordPress hosting.

Test 1: Vultr Cloud Compute $10 / Month

The first test I performed was against the $10 per month Vultr Cloud Compute offering. As expected of a $10/month VPS performance wasn’t awesome, but it also wasn’t terrible.

All the red of red land

As you can see, lots of failed requests and a throughput of only 16 req/s. Not unexpected with a single core and 1 GB of RAM. After all, I was throwing 2000 concurrent users at the server. The response time distribution was similarly bad.

Bad, but could be a lot worse.

Overall, the results for the $10 VPS were as expected. This isn’t really an apples to apples comparison (we’ll get to that later), but I wanted to give you an idea of what basic VPS instance performance looks like.

Test 2: Vultr Cloud Compute $80 / Month

With this test we’re starting to get closer to the cost of bare metal and dedicated instances. This server had 6 CPUs and 16GB of RAM. Considerably more robust than the $10 server.

Lots of red, but also blue!!!

This graph tells a much different story than the previous test. Performance peaked at 169 req/s and then leveled off at 100 req/s. We still saw a lot of errors, but once again this isn’t unexpected. Honestly if you started to get this much traffic you would likely start breaking up WordPress into its components (file system, PHP + Nginx, MySQL) and start scaling horizontally.

Much Lower Response Time Distribution

The response time distribution was much better for this server as well. The upper end was just as bad as the cheaper box, but the 90% and below ranges were pretty solid for the amount of traffic that was being received.

Test 3: Vultr Bare Metal $120 / Month

The Vultr Bare Metal server was the instance I was most excited about testing. I’ve always had a soft spot for hardware and getting access to a bare metal server is pretty cool. For $120 per month (on sale, price will rise to $300/month eventually) you get 8 CPUs and 32GB of RAM. This is a pretty serious server.

Oooh, 200 req/s.

Lots of blue on this graph but also the expected amount of red. You can see that throwing 2 more non-virtual CPUs and 2X the RAM made a pretty big difference. We peaked at 200 req/s and then leveled out at 125 req/s. For reference that is 17.2 million requests per day.

🙁

The lower end of the response time distribution was solid, but the upper end wasn’t great at all. With all of those errors it isn’t surprising that this is the case.

Test 4: Vultr Dedicated $120 / Month

I honestly had a tough time figuring out why Vultr priced the bare metal and dedicated instances so close to each other. Dedicated is clearly inferior (fewer CPUs and less RAM), so why would anyone choose it? Anyway, let’s take a look at the graph.

💩💩💩💩💩

This test peaked at 100 req/s and then leveled off at around 70. I really would expect a lot better performance for this sort of money.

Also 💩, but not as much 💩.

Response time distribution was similar to the other boxes. With all the failures it tends to skew pretty hard in the wrong direction. I’m sure that there is a use case for these dedicated Vultr instances, but it definitely isn’t hosting a WordPress site.

Conclusions

With all of this data it was pretty easy to graph which of these is the best value.

Value was calculated by taking the cost per month and dividing it by the maximum requests per second each instance reached. Based on the performance we saw above, the Vultr Cloud Compute instances seem like your best value for WordPress hosting. Vultr Bare Metal and Dedicated instances aren’t a great choice for WordPress. As mentioned above, there are likely use cases where they are a good choice though (maybe workloads that require very consistent performance).
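Here's that calculation reproduced, using the peak req/s figures from the tests above:

```python
# Peak req/s observed in the tests above, per Vultr instance type.
results = {
    "Cloud Compute $10": (10, 16),
    "Cloud Compute $80": (80, 169),
    "Bare Metal $120":   (120, 200),
    "Dedicated $120":    (120, 100),
}
for name, (cost, peak_rps) in results.items():
    # Lower is better: dollars of monthly cost per req/s of peak throughput.
    print(f"{name}: ${cost / peak_rps:.2f} per req/s")
```

The $80 Cloud Compute instance comes out cheapest per req/s, with Dedicated roughly twice as expensive per unit of throughput as everything else.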

As with all of these tests, your mileage may vary! I highly recommend that you run load tests on any new host that you use to get an idea of what sort of performance you can expect.

Load test your own WordPress site with Kernl! Getting started is free!

$5 WordPress VPS Performance Showdown

Want to load test your own WordPress site? Sign up for Kernl now!

In the world of affordable WordPress hosting there is an array of different VPS providers to choose from. With so many choices how do you know who to choose? In addition to criteria such as ease of use and support, performance is a huge concern for most people deploying WordPress. In this article we’ll take a look at the performance of several different VPS providers in the $5 tier to see how they perform under load by using Kernl’s WordPress Load Testing feature.

Who are we testing?

There are a lot of VPS providers out there providing machines in the $5 / month tier, so we’ve chosen 7 of the more popular providers to test against:

  • Digital Ocean (1GB RAM, 1vCPU)
  • Linode (Nanode 1GB, 1vCPU)
  • Vultr (1vCPU, 1GB RAM)
  • AWS Lightsail (1 GB RAM, 1 vCPU)
  • Hetzner (2vCPU, 4GB RAM)
  • Google Cloud (f1-micro: 1vCPU 600MB RAM)
  • Azure ($15 A0 1vCPU, 1GB RAM)

What tests will we run?

Using Kernl’s WordPress Load Testing feature we ran 2 different tests per provider:

  • No Cache – 200 concurrent users for 30 minutes. We used this test to see raw WordPress performance with no caching enabled.
  • Cached with W3 Total Cache + Memcached – 200 concurrent users for 30 minutes. We used this test to see what a more real-world scenario looks like. In general most people use some form of caching on their site.

VPS Setup

To make sure that our test setup was consistent across all VPS providers we followed this setup guide, where we ended up with the following versions of software:

With regards to regions, for every test we kept the VPS instances on the east coast of the United States, with the exception of Hetzner, where the VPS instance was in Germany.

Results (Request & Failures)

First, let’s take a look at the request / second results across the different providers.

As you can see there is a wide spread of results depending on host. Honestly this wasn’t what I expected when I started. You’ll also notice that the Azure box cost $15/month. It was the closest I could get to finding a $5/month box in their interface (which I felt like I needed another degree in Computer Science to understand!).

So let’s visualize the data with no caching enabled.

We get lots of interesting results here. If you run a site where caching is difficult, how far your $5 goes depends heavily on your host. Some notes:

  • Google Cloud and Azure performed TERRIBLY. I’m not sure why. Maybe it had to do with accessing the disk so frequently to load up PHP files? (though I’d expect those to be cached by PHP-FPM or some other underlying process).
  • If you are in Europe, Hetzner is your friend. $5/month gets you 4x the RAM and 2x the vCPUs of the next closest provider.
  • If you are in the US, AWS seems to be winning in this test but not by much. It feels like you would be fine going with Digital Ocean or Vultr.
  • Before making any real decisions on a host, I’d want to run 5 or so tests across different instances to make sure that there isn’t a lot of variance in my results. Noisy neighbors can often be a problem on VPS providers.

Now let’s take a look at a more realistic scenario where you have a caching plugin installed.

You’ll notice that there aren’t any error bars on this graph. That’s because each host was able to handle the load without any errors. This isn’t too surprising since most of the requests would be served right out of memory via Memcached. Some notes on cached requests:

  • Once again, Google Cloud and Azure perform the worst out of any of these hosts. Given how highly regarded they are in the hosting ecosystem outside of WordPress I expected better performance.
  • All of the other providers posted impressive numbers (170 req/s – 180 req/s). At that level I would probably choose whichever provider had the best support, user interface, and reliability.
  • I suspect that most of these boxes could handle a little bit more load before going under. If I increased this test by an order of magnitude (200 users to 2000 users) I think most of the providers would tap out before Hetzner does due to how much more RAM and CPU it has.

Results (Response Time Distribution)

While requests per second and failures per second are valuable metrics, in order to get a more holistic view of raw performance for these $5 VPS instances we need to look at response time distribution.

So how should you interpret this chart? For the percentage columns, each value is in milliseconds. If you look at the 99% column, you can see that 99% of Vultr requests returned in <= 3600ms. If you look at the 80% column for Vultr, you can see that 80% of Vultr requests returned in <= 3400ms. Let’s take a look at our un-cached results.
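As an aside, if you want to build the same percentile columns from your own raw timings, numpy does the work; the sample data below is made up for illustration.

```python
import numpy as np

# Made-up response times (ms); a real test produces thousands of samples.
samples = np.array([120, 180, 220, 310, 400, 520, 900, 1400, 3400, 3600])
for p in (50, 80, 90, 99):
    print(f"{p}% of requests finished in <= {np.percentile(samples, p):.0f} ms")
```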

Some notes on this graph:

  • Any service that hit the 5000ms mark killed the request. I think that they likely would have gone far beyond that.
  • Once again, for $5 Hetzner is just crushing it. You simply can’t compete with 4GB RAM and 2vCPUs. Even with no caching their 99th percentile was under 2 seconds!
  • For our non-European readers, Digital Ocean, Vultr, and AWS seemed to perform the best. AWS remained remarkably consistent across the response time distribution range. This is a good thing.
  • Google Cloud… wtf? So 99% of your requests finished in <= 5 seconds, and then 90% of requests finished in <= 300ms? Something is fishy.

And now, let’s see how things change when caching is used.

As expected, most providers do very well when caching is used. Digital Ocean, Linode, Vultr, AWS, and Hetzner are all performing in the <= 200ms range (some lower!). It’s hard to pick a winner at this level because of latency from geographic distance. The point is that you could choose any of those hosts and be OK when using a cache plugin. Once again, I’m struggling to figure out why someone would spend their $5 on Azure or Google Cloud.

Results (Total Requests)

Our final metric we need to look at before we can pass any judgement on our VPS providers is total requests. This in particular needs to be compared with response time distribution.


This graph does a great job explaining some of the discrepancies in the response time distribution data. When caching wasn’t enabled, Google Cloud and Azure barely even show up on the graph. More thoughts:

  • I’m 90% positive that the Google Cloud and Azure instances nearly stopped processing requests at some point. They were so overwhelmed that they just fell over. The data seems to support this.
  • Google Cloud and Azure are not the best place to spend $5. Even with caching I would be scared if there was ever a cache-miss.
  • Hetzner is the clear winner in un-cached data. Once again, this makes sense given how much more machine you are getting for your money.
  • On the U.S. side of the ocean, AWS seems to win here, although not by much over Digital Ocean and Vultr. Once caching is taken into consideration they all perform roughly the same (accounting for latency between data centers and load generators).

Conclusions

If performance is all you care about for your $5 (and that’s a big if), then choose Hetzner if you need a VPS in Europe. If you need a VPS in the US or elsewhere, choose AWS, Digital Ocean, or Vultr. Microsoft and Google are not great for $5.

Want to load test your own WordPress site? Sign up for Kernl now!

Measuring the Performance of WordPress on DigitalOcean Droplets

Earlier this year Kernl launched our WordPress Load Testing offering. Prior to the launch we had been doing a series of blog posts testing the performance of “managed” WordPress providers. In this blog post I’ll test the performance of Digital Ocean’s VPS solutions with a standard LEMP (Linux, Nginx, MySQL, PHP-FPM) installation by scaling it vertically.

Want to measure your own WordPress performance under heavy load? Sign up for Kernl.

Server & WordPress Configuration

The Digital Ocean 1-click LEMP install is already configured to take full advantage of as many cores and as much RAM as is available. During the course of these tests I never needed to tweak any server settings. That doesn’t mean that more performance couldn’t have been extracted through configuration tweaks, but even with these settings we were able to gain useful information about WordPress performance on the different Digital Ocean droplet levels.

With regards to WordPress configuration… there isn’t any. No caching, CDNs, or compression. Just raw WordPress. It’s easy to get high performance out of WordPress if you cache everything, but seeing what performance looks like in the worst-case scenario is far more interesting.

The Test

For this series of tests I imported the contents of my personal blog (Re-cycledair) to the target and then used Kernl to run the load tests. Due to the nature of my blog this test is very read heavy. While this isn’t realistic for everyone, for my personal blog the test is representative of the actual traffic that it receives.

The test itself looked like:

  • Concurrent users – 500
  • Ramp up – 2 users per second
  • Duration – 30 minutes
  • Request Rate – Each user makes 1 request every second
  • Droplet location – New York City
  • Load test generator location – Amsterdam

Results

As expected the amount of traffic that we could handle scaled linearly with the amount of hardware we were using (and cost).

Cost -vs- Performance

One interesting thing you’ll see in the graph is that in two places the requests per second actually trend down.

Data

Looking at the data above you can see that I tried Digital Ocean’s standard droplets as well as their CPU optimized droplets. The difference between them is cost and dedicated hyper-threads. From a cost perspective, you’re better off going with the standard droplet in the same price category. Personally I expected the CPU optimized droplet to perform better, but this might not be the best type of workload for it.

What About Caching?

Just for fun I tried out the one-click install of WordPress with LiteSpeed configured on a $5 / month droplet.

Things went really really really well.

🔥🔥🔥🔥🔥

That’s right. By the time the test was completed my little $5 / month droplet was receiving 1800 requests / second WITH NO ERRORS. For perspective, that’s 4.6 billion requests per month.

Conclusions

The data does a good job speaking for itself here, but in general you should definitely stick to Digital Ocean’s standard droplets when running WordPress. Even without caching you can get really good performance out of the $40/month droplet.

Want to measure your own WordPress performance under heavy load? Sign up for Kernl.

Introducing WordPress Load Testing

In November 2018 Kernl launched the closed beta of our new WordPress load testing service. After a lot of changes based on the feedback we received we’re finally bringing WordPress Load Testing out of beta and into general availability.

Why Load Test?

There are a lot of different reasons to load test.

  • Infrastructure and Hosting – Kernl WordPress load testing gives you confidence that you are making the right decisions with your infrastructure and hosting. Looking to change hosts but aren’t sure how big or expensive of a plan you need? Run a load test.
  • Performance Testing – Load testing gives you confidence that the SQL query you just wrote isn’t going to collapse your website under load.
  • Confidence with your clients – Load testing lets you tell your clients with confidence that their new website can handle 100,000 visitors a day without any degradation in response time.
WordPress Load Testing graph
Load Test of Kernl Blog (horizontally scaled)

How Does It Work?

Kernl’s WordPress Load Testing solution makes load testing your WordPress site a breeze. You only need to verify your ownership of the site with an easy-to-use WordPress plugin and then start testing. No coding or infrastructure management needed.

Start a load test in 1 minute

How Much Does it Cost?

Kernl’s WordPress Load Testing is included with your Kernl subscription. Usage per plan is as follows:

Plan Type          | Load Tests | Max Duration | Machine Hours | Retention
Old Plans          | 2          | 15 min       | 5             | 5 days
Solo               | 10         | 60 min       | 5             | 30 days
Agency             | 30         | 120 min      | 30            | 90 days
Enterprise & Above | 100        | 300 min      | 100           | 365 days

If your needs don’t fit neatly into one of these categories feel free to reach out. We’d be happy to create a custom plan to suit your needs.

Sign Up Now

To get started with easy WordPress load testing, sign up for Kernl.

Adventures in Scalable WordPress Hosting: Part 2

Interested in testing your WordPress scalability? Check out the Kernl WordPress Load Testing beta program!

In part 1 of this series I explored scaling WordPress using WP Super Cache and by throwing more expensive hardware at the problem. In part 2 of this series we’ll go on an adventure in horizontal scalability using load balancers, NFS, Memcached, and an externally hosted MySQL.

The Plan

To horizontally scale any app is an exercise in breaking things apart as much as possible. In the case of WordPress there are a few shared components that I wanted to break up:

  • File System – The file system is the most problematic part of scaling WordPress. Unless you change how WordPress stores plugins, themes, media, and other things you need to have a shared file system that all nodes in your cluster can access. There are likely some other solutions here, but this one provides a lot of flexibility.
  • MySQL – In many WordPress installs MySQL lives on the same machine as WordPress. For a horizontally scaled cluster this doesn’t work so we need a MySQL that is external.
  • Memcached – It was brought to my attention that during part 1 of this series using WP Super Cache to generate static pages was sort of cheating. In the spirit of making this harder for myself I introduced W3 Total Cache instead and will be using an external Memcached instance as the shared cache.

Now that the basic why and what is out of the way, let’s talk about the how. I’m a huge fan of Digital Ocean. I use them for everything except file storage, so I’m going to use them for this WordPress cluster as well. Here’s how it’s going down:

  1. Create a droplet that will act as the file system for our cluster. Using NFS all droplets in the cluster will be able to mount it and use it for WordPress. I’m also going to use this for Memcached since NFS doesn’t take up many resources.
  2. Create a base droplet that has Nginx and PHP7.2-FPM installed on it. There is a little bit of boilerplate configuration here, but in general the install is typical. The only change to the Nginx configuration was setting the root directory to the NFS mount. Use this base droplet to configure WordPress database settings.
  3. Use Compose.io to create a MySQL database. I wanted something that was configured well that I didn’t have to think about. Totally worth the $27 / month.
  4. Once the above are done take a snapshot of the base droplet and use it to create more droplets. If all goes well you shouldn’t need to do any configuration.
  5. Using Digital Ocean’s load balancer service add your droplets to the load balancer.
  6. Voila! That’s it.
Ugly architecture diagram

No Cache Smoke Test

200 users, 10 minutes, 2 users/sec ramp up, from London

As with every load test that I do, the first test is always just to shake out any bugs in the load test itself. For this test I didn’t have any caching enabled and only a single app server behind the load balancer. It was effectively the same as the first load test I did during part 1 of this blog series.

As you can see from the graph below, performance was what we would expect from the setup that I used. We settled in to 21 requests / second with no errors.

As Expected.

The response time distribution wasn’t great. 90% of requests finished in under 5 seconds, but that’s still a very long time. Generally if I saw this response time distribution I would think that it’s time to add caching or scale up/out.

Not bad. Not great.

So. Many. Failures.

2000 users, 120 minutes, 2 users/sec ramp up, from London

The next test I decided to run was the sustained heavy load test. This is generally where I start to see failures from managed WordPress hosting providers. Given that I didn’t add any more app servers to the load balancer and had no caching things went as poorly as you would expect.

All the failures of failure land.

Everything was fine up until ~25 req/s and then the wheels fell off. The response time distribution was bad too. No surprises here.

50% of requests in 5 seconds, 100% in…33 seconds 🙁

Looks like it’s time to scale.

Horizontal Scalability

2000 users, 120 minutes, 2 users/sec ramp up, from London

Before adding Memcached to the setup I wanted to see how it scaled without it. That means adding more hardware. For this test I added four more application servers (Nginx + PHP) to the load balancer and ran the test again.

Linear Growth

As you can see from the request/failure graph we experience roughly linear growth in our maximum requests/second. Given we originally maxed out at ~20 req/s on one machine, maxing out at ~100 req/s with five machines seems like exactly the sort of result that I would expect to see. The response time distribution also started to look better:

Not perfect, but better.

Obviously a 90th percentile of 4 seconds isn’t awesome, but it is a lot better than the previous test. I did make a tiny tweak to the load balancer configuration that may have helped though. I used the ‘least connections’ option instead of ’round robin’. ‘Least connections’ tells the load balancer to send traffic to the app server with the fewest active connections. This should help avoid dog-piling on a server with a few slower connections.
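Digital Ocean's load balancer implements this for you, but as a rough illustration of the difference between the two strategies, here's a toy sketch in Python:

```python
import random

def round_robin(servers, counter):
    """Classic rotation: ignores how busy each server currently is."""
    return servers[counter % len(servers)]

def least_connections(servers):
    """Route to whichever app server has the fewest in-flight requests."""
    return min(servers, key=lambda s: s["active"])

# Five app servers with a made-up number of in-flight requests each.
servers = [{"name": f"app{n}", "active": random.randint(0, 10)} for n in range(5)]
print("round robin picks:", round_robin(servers, 7)["name"])
print("least connections picks:", least_connections(servers)["name"])
```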

Given the results above we can safely assume linear growth tied to the number of app servers that we have for quite some time. Meaning for each app server that I add I can expect to handle an additional ~20 req/s. With that in mind, I wanted to see what would happen if I enabled some caching on this cluster.

Gotta Go Fast

In my previous test of vertical scaling I used WP Super Cache to make things go quick. WP Super Cache generates static HTML pages for your site and then serves those. The benefit being that static pages are extremely fast to serve. In this test I wanted to try a more dynamic approach using Memcached and W3 Total Cache. W3 Total Cache takes a very different approach to caching, storing pages, objects, and database query results in Memcached. In general this caching model is more flexible, but possibly a bit slower. I installed Memcached on the same server as the NFS mount because it was under-utilized. In a real production scenario I wouldn’t violate this separation of concerns.
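W3 Total Cache itself is PHP, but the cache-aside pattern it applies looks roughly like this Python sketch. The Memcached address is hypothetical and `render_with_wordpress` is a stand-in for the real Nginx + PHP-FPM + MySQL render.

```python
from pymemcache.client.base import Client

cache = Client(("10.0.0.5", 11211))  # hypothetical shared Memcached host

def render_with_wordpress(path: str) -> bytes:
    """Stand-in for the real PHP render (Nginx + PHP-FPM + MySQL)."""
    return f"<html>rendered {path}</html>".encode()

def get_page(path: str) -> bytes:
    """Cache-aside: try Memcached first; render and store on a miss."""
    key = f"page:{path}"
    html = cache.get(key)
    if html is not None:
        return html                     # hit: no PHP render, no MySQL query
    html = render_with_wordpress(path)  # miss: do the expensive work once
    cache.set(key, html, expire=300)    # keep it warm for 5 minutes
    return html
```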

Once I enabled W3 Total Cache and re-ran the last test I got some pretty great results.

Boom.

With W3 Total Cache enabled and 5 app servers we settled in at ~370 requests/second. More impressive is that we only saw 5 failures during the entire test. For perspective, Kernl pushed 1,329,470 requests at the WordPress cluster I created. That’s a failure rate of roughly 0.0004%.

My favorite part of this test was the response time distribution. Without having to wait on MySQL for queries the response times became crazy good.

The “bad” outlier is only 2.5s.

99% of requests finished in 29ms. And the outlier at 100% was only 2.5 seconds. Not bad for WordPress.

Going Further

Being the good software developer that I am, I wanted to push this setup to its limits. So I decided to try a test that is an order of magnitude more difficult:

20,000 users, 10 users/sec ramp up, for 60 minutes, from London

Things didn’t go great, but not because of WordPress. I won’t show any graphs of this test, but I started to get limited by the network card on the NFS/Memcached machine. Digital Ocean says that I can expect around 30MB/sec out of a given droplet, and with this test I was starting to bump into that limit. If I wanted to test further I would have had to load balance Memcached, which felt a little outside of scope. In a real production scenario I would likely pay for a hosted Memcached service to deal with this for me.
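Some back-of-envelope math shows why the NIC becomes the ceiling. The 30MB/sec figure is Digital Ocean's quoted above; the average response size is a hypothetical assumption for illustration.

```python
# Rough ceiling math: ~30 MB/s per droplet (Digital Ocean's figure quoted
# above) and a hypothetical ~60 KB average response size.
nic_bytes_per_sec = 30 * 1024 * 1024
avg_response_bytes = 60 * 1024
print(f"~{nic_bytes_per_sec / avg_response_bytes:.0f} req/s before the NIC saturates")
# ~512 req/s: not far above the ~370 req/s the cluster was already serving.
```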

Conclusions

With Kernl I’m always weighing the build versus buy question when it comes to infrastructure and services. Given how much effort I had to put in to making this setup horizontally scalable and how much effort it would take to make it reproducible and manageable, it hardly seems worth creating and managing my own infrastructure.

Aside from my time the cost of the hardware was also not cheap.

  • Load Balancer – $10 / month
  • MySQL Database – $27 / month
  • Memcached (if separate from NFS) – $5 / month
  • NFS Mount (if separate from Memcached) – $5 / month
  • Application Servers – $25 / month ($5 / month * 5 servers)
  • Total – $72 / month

At $72 / month I could easily have any of the managed WordPress hosting companies (GoDaddy, SiteGround, WP Engine, etc.) run my setup and handle updates, security, and so on. The only potential hiccup is the traffic limits they place on your account. This setup can handle millions of requests per day, and while their setups can too, they’ll charge you a hefty fee for it.

As with any decision about hardware and scaling the choice varies from person to person and organization to organization. If you have a dedicated Ops team and existing hardware, maybe scaling on your own hardware makes sense. If you’re a WordPress freelancer and don’t want to worry about it, maybe it doesn’t. IMHO I wouldn’t scale WordPress on my own. I’d rather leave it to the professionals.

Interested in testing your WordPress scalability? Check out the Kernl WordPress Load Testing beta program!