Cloudways WordPress Performance Value Review

Cloudways is a managed WordPress hosting provider that lets you deploy your WordPress site onto one of several underlying cloud platforms: Google Cloud, Amazon Web Services, Vultr, Linode, and Digital Ocean.

Cloudways home page

If you are just starting with Cloudways it can be tough to figure out where your money is best spent! Prices differ for each platform, and the instance sizes aren’t the same either. This review gathers data using Kernl’s WordPress Load Testing tool and then distills that data into something easy to digest.

Load & performance test your own WordPress site with Kernl’s WordPress Load Testing service!

What Was Tested?

We tested the following Cloudways setups:

  • Google Cloud – A “small” instance out of their Northern Virginia data center. $39.36 / month.
  • Amazon Web Services – A “small” instance out of their Northern Virginia data center. $36.51 / month.
  • Vultr – “2GB” plan out of their New Jersey data center. $23 / month.
  • Linode – “2GB” plan from their New Jersey data center. $24 / month.
  • Digital Ocean – “2GB” plan from one of the New York City data centers. $22 / month.

And we had the following load testing setup:

  • Content was an export of this blog
  • 1000 concurrent users
  • 30 minute duration
  • 3 users per second ramp up
  • Load generating machines were in the San Francisco #2 Digital Ocean data center.
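Kernl handles the actual test execution for you, but if you wanted to approximate this configuration yourself with an open-source tool such as Locust (not necessarily what Kernl uses under the hood), a minimal sketch might look like the following; the host and paths are hypothetical:

```python
# locustfile.py -- a rough, hypothetical approximation of the test above,
# not Kernl's implementation.
# Run with: locust -f locustfile.py --headless --host https://your-blog.example -u 1000 -r 3 -t 30m
from locust import HttpUser, task, between

class BlogReader(HttpUser):
    wait_time = between(1, 3)  # simulated think time between page views

    @task(3)
    def front_page(self):
        self.client.get("/")

    @task(1)
    def sample_post(self):
        # hypothetical path; a real test would request pages from the exported blog content
        self.client.get("/sample-post/")
```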

Overall Cloudways Performance and Value

Each provider performed extremely well in our tests. No errors were reported and they all reached over 700 requests / s of sustained traffic. However, not all hosts are created equal.

|                        | AWS       | GCP       | Linode    | Vultr     | Digital Ocean |
|------------------------|-----------|-----------|-----------|-----------|---------------|
| Cost / month (cents)   | 3651      | 3936      | 2400      | 2300      | 2200          |
| Total requests         | 1,340,967 | 1,274,018 | 1,442,301 | 1,455,722 | 1,445,759     |
| $ / 1000 req (cents)   | 2.72      | 3.09      | 1.66      | 1.58      | 1.52          |

Given that each load test was exactly the same, you can see that some hosts performed better than others.
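For clarity, the cost-per-1,000-requests row is nothing more than the monthly price divided by the number of requests each host served during the test window. A quick sketch of that arithmetic:

```python
# Re-deriving the "$ / 1000 req" row of the table above.
monthly_cost_cents = {"AWS": 3651, "GCP": 3936, "Linode": 2400,
                      "Vultr": 2300, "Digital Ocean": 2200}
total_requests = {"AWS": 1_340_967, "GCP": 1_274_018, "Linode": 1_442_301,
                  "Vultr": 1_455_722, "Digital Ocean": 1_445_759}

for host, cost_cents in monthly_cost_cents.items():
    cents_per_1k = cost_cents / (total_requests[host] / 1000)
    print(f"{host}: {cents_per_1k:.2f} cents per 1,000 requests")
```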

Cost per 1000 requests (in cents)

AWS and GCP are a bit more costly than Linode, Vultr, and Digital Ocean and they also performed worse than the lower cost providers.

Cloudways Provider Response Times

While cost per request is a good metric, we also care about the quality of those requests. Quality can be measured in a number of ways, but response time is often a “good enough” measure of how well a particular host performs. For the providers that you can get through Cloudways, the variance in request quality is extremely interesting.

The chart below shows the 99th percentile response time performance for each provider. It means that 99% of all requests during the load test finished at or below this time.
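Kernl reports this number directly, but as a concrete illustration of what a 99th percentile is, here is a small sketch using made-up response times (this is not Kernl’s code):

```python
# Hypothetical illustration of a 99th percentile calculation.
import numpy as np

# pretend these are individual response times (in ms) collected during a load test
response_times_ms = [95, 101, 110, 112, 120, 130, 143, 170, 220, 2400]
p99 = np.percentile(response_times_ms, 99)
print(f"99% of requests finished in {p99:.0f} ms or less")
```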

99th percentile performance

You can see that lower cost providers once again out-perform the higher cost AWS and GCP in pretty significant ways. The really interesting result here is that Vultr responded to 99% of requests in 240ms or less! Way to go Vultr!

Conclusions

If you aren’t sure which provider is the best value on Cloudways, from a raw cents-per-request perspective you can’t go wrong with Linode, Vultr, or Digital Ocean. It’s the same story when you consider the quality (i.e. response time) of service: Vultr, Linode, and Digital Ocean are the clear winners here.

Full WordPress Load Test Results:

Load & performance test your own WordPress site with Kernl’s WordPress Load Testing service!

Load Testing Vultr’s New High Frequency Servers with WordPress

Run your own WordPress Load Tests with Kernl!

Back in June Vultr announced the general availability of their new “High Frequency” servers. Reading through the announcement I was intrigued by their claim of using “3+GHZ processors and blazing fast NVMe storage!” and immediately wondered what WordPress performance would look like versus their regular Cloud Compute offering.

What was tested?

To get a better idea of the performance characteristics of the Vultr High Frequency Compute servers, I ran several types of load tests with all Vultr servers sitting in their Silicon Valley data center:

  • 200 concurrent users from New York City (Digital Ocean NYC3)
  • 2000 concurrent users from New York City (Digital Ocean NYC3) & London (Digital Ocean LON1)

And I tested the following scenarios:

  • $6 High Frequency vs. $5 Cloud Compute – stock WordPress image
  • $24 High Frequency vs. $20 Cloud Compute – performance tuned

All servers with the exception of the “performance tuned” ones were built using the pre-built WordPress image that Vultr offers, with no caching or performance tuning done. The “performance tuned” servers were created using Nginx, PHP-FPM, MariaDB, Memcached, and W3 Total Cache.

Results

Starting with the Vultr High Frequency boxes, let’s break down performance.

$6 32GB NVMe

The $6 Vultr High Frequency(HF) machine performed well against the roughly equivalent $5 Vultr Cloud Compute(CC) instance. If we take a look at the requests/failures per second charts you can see that the HF machine out-performed the CC instance by quite a lot.

$6 32GB NVMe Vultr High Frequency Requests/Failures
$5 25GB SSD Vultr Cloud Compute Requests/Failures

As you can see the HF server was able to handle roughly twice as many requests per second as the CC server without any errors. Let’s check out the average and median response times as well as the response time distribution.

$6 32GB NVMe Vultr High Frequency Response Time
$6 32GB NVMe Vultr High Frequency Response Time Distribution

You can see that as the test progressed response times got steadily worse until leveling out at around 2.5s. The response time distribution shows that 99% of requests finished in 3s or under, with the 100th percentile outlier coming in at just under 6s. This is honestly pretty good performance for no tuning at all.

$5 25GB SSD Vultr Cloud Compute Response Time
$5 25GB SSD Vultr Cloud Compute Response Time Distribution

The response times for the $5 cloud compute instance were about 2 seconds worse than the high frequency instance, with the average coming in at around 4.5 seconds. The response time distribution was worse from a performance perspective, but better from a consistency perspective, with the spread between the 50th percentile and the 100th percentile only being ~1.3s.

At a glance, it looks like the Vultr High Frequency server out-performs the cloud compute in a significant way for only $1 extra. However, our next tests show that it might not be so simple.

$24 128GB NVMe (Performance Tuned)

The next set of tests ran 2000 concurrent users against a performance tuned setup. The traffic originated from a cluster of servers in both New York and London, with the host server being in Vultr’s Silicon Valley data center.

First, let’s take a look at the requests per second handled by the $24 High Frequency server and the $20 Cloud Compute server.

$24 128GB NVMe Vultr High Frequency Requests

Now, 2000 concurrent users is a lot. I think that the HF machine did quite well overall, but I confess that I did expect a bit more out of it. It topped out at around 1250 requests per second, with errors hovering in the ~175 requests per second range. If left to run for another 30 minutes I think it would have reached closer to 1350 requests per second.

$20 80GB SSD Vultr Cloud Compute Requests

This is where things start to get interesting. The Vultr marketing team bills the high frequency machines as head and shoulders above the regular cloud compute machines. Maybe they are for some workloads, but for this test they don’t seem much better at all. Looking at both graphs you can see that the HF machine topped out a bit higher and had fewer errors, but not by enough to warrant the extra cash (in my opinion). Let’s see what sort of story the response times tell.

$24 128GB NVMe Vultr High Frequency Response Time
$24 128GB NVMe Vultr High Frequency Response Time Distribution

On the performance tuned box you can see that the requests that did complete successfully were all quite fast. The average response time was between 100ms and 200ms throughout the entire test. The response time distribution was solid as well, with 99% of requests finishing in under 500ms. Not bad for 2000 concurrent users.

Now let’s take a look at how the comparable cloud compute server did.

$20 80GB SSD Vultr Cloud Compute Response Time
$20 80GB SSD Vultr Cloud Compute Response Time Distribution

For the Vultr Cloud Compute instance, response times also hovered between 100ms and 200ms, although just a hair higher than the high frequency server. The response time distribution was excellent as well, with 99% of requests coming in under 500ms. The main difference is that there was a 100th percentile outlier here that the high frequency server didn’t have.

Thoughts & Conclusions

From what I can tell, the Vultr High Frequency servers are better than the Cloud Compute servers, but maybe only for certain types of workloads. You can see in the $5/$6 test that they easily out-performed the cloud compute instances, but in the more expensive $20/$24 test the results weren’t so cut and dry.

If we look at it from a requests per dollar standpoint, the cloud compute instance is actually a better value for this type of workload.

Requests per Dollar

I suggest running extensive load tests against the Vultr High Frequency machines before making any effort to switch over. They might be better for you. They might perform the same for more money.

In conclusion: ¯\_(ツ)_/¯

Run your own WordPress Load Tests with Kernl!

Load Testing the CloudWays Managed WordPress Service

At the beginning of December Kernl launched the closed beta of its WordPress load testing service. As a way to shake out any bugs, we’ve decided to run a blog series load testing managed WordPress services. Today we’re going to talk about the CloudWays managed WordPress service. In particular, CloudWays deployed to Vultr.

How is the platform judged?

Cloudways will be tested using 3 different load tests:

  • The Baseline – This is a 200 concurrent user, 10 minute, 2 users/s ramp-up test from San Francisco. This test is used to verify the test configuration and to make sure that Cloudways doesn’t go belly-up before we get started 🙂
  • The Sustained Traffic Test – This test is for 2000 concurrent users, ramps up at 2 users/s, from San Francisco, for 2 hours. The sustained traffic test represents a realistic load for a high traffic site.
  • The Traffic Spike Test – This test is intentionally brutal. It simulates 20000 concurrent users, ramps up at 10 users/s, from San Francisco, for 1 hour. It represents the sort of traffic pattern you might see if a Twitter celebrity shared a link to your blog.
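One thing worth keeping in mind when reading the results: at these ramp rates, the larger tests spend a meaningful chunk of their duration just climbing to full concurrency. A quick back-of-the-envelope sketch, using only the numbers from the list above:

```python
# Time spent ramping up to full concurrency for each scenario above.
scenarios = [
    # (name, concurrent users, ramp-up in users/s, duration in seconds)
    ("Baseline",           200,  2,  10 * 60),
    ("Sustained Traffic", 2000,  2, 120 * 60),
    ("Traffic Spike",    20000, 10,  60 * 60),
]

for name, users, ramp_per_s, duration_s in scenarios:
    ramp_min = (users / ramp_per_s) / 60
    print(f"{name}: {ramp_min:.1f} min of ramp-up in a {duration_s / 60:.0f} min test")
```

Note that at 10 users/s the traffic spike test needs more than half an hour just to reach 20,000 concurrent users.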

What CloudWays plan was used?

For this test we used the lowest tier plan available while hosting on Vultr. The cost of the plan is $11 / month and includes full SSH access to the box that CloudWays deploys your WordPress instance on.

Selected CloudWays Plan

Where does the traffic originate?

The traffic for this load test originates in Digital Ocean’s SFO2 (San Francisco) data center. The Vultr server lives in their Seattle data center.

The baseline load test

200 concurrent users, 2 users / s ramp up, 10 minutes, SFO

The baseline WordPress load test that we did with CloudWays is used to verify the test configuration. CloudWays performed well on this test. You can see from the request graph that we settled in at around 25 requests / second.

CloudWays Baseline Test – Requests per second

The failure graph for the baseline load test was empty, which is generally expected for the baseline test.

CloudWays Baseline Test – Failures

Finally, the response time distribution graph for the baseline test. You can see that 99% of the requests finished in ~200ms. There was at least one outlier at the ~5000ms mark, but this isn’t uncommon for load tests.

CloudWays Baseline – Response Time Distribution

The sustained heavy traffic load test

2000 concurrent users, 2 users / s ramp up, 2 hours, SFO

The sustained traffic load test represents what a WordPress site with high readership might look like day over day.  The CloudWays setup responded quite well for the hardware that it was on.

CloudWays Sustained Load Test – Requests

You can see that performance was great for the first 10% of the test. The CloudWays setup had no trouble handling the load thrown at it. However, once we got to around 85 requests / second, the hardware had trouble keeping up with the request volume. You can see from the choppy behavior of the request graph that the Varnish server that sits in front of WordPress was starting to get overwhelmed. Considering that this particular CloudWays plan was deployed to an entry-level Vultr VM, this performance isn’t bad at all.

The failure graph was a little disappointing, but not unexpected knowing the hardware that we tested on. It is very likely that if we had tested on a more robust underlying Vultr box we would have had much better results. You can see that failures increased at a fairly linear rate throughout the load test.

CloudWays Sustained Load Test – Failures

The final graph for this test is the response time distribution graph. This graph shows, for a given percentage of requests, how many milliseconds they took to complete. In this case CloudWays didn’t perform great, but once again I’ll point to the fact that the underlying Vultr hardware isn’t that robust.

CloudWays Sustained Load Test – Response Time Distribution

From the graph you can see that 99% of requests completed in ~95 seconds. Yes, you read that correctly. You can interpret this graph as you like, but taking the other graphs into consideration you can see that Varnish and the underlying Vultr hardware were completely overwhelmed. Knowing that makes this a little less terrible. We suspect that a smaller load test (maybe 750 concurrent users?) might yield a far better response time distribution. Once a server becomes overwhelmed, the response time distribution tends to go in a bad direction.

The traffic spike load test

20000 concurrent users, 10 users / s ramp up, 1 hour, SFO

Given what we know about the sustained traffic load test, your expectations for how this test went are probably spot on. CloudWays did as well as can be expected given how the underlying hardware is allocated, but you would likely need to upgrade to a much larger plan to handle this level of traffic. We ended up stopping this load test after about 30 minutes due to the increased failure rate.

CloudWays Traffic Spike Load Test – Requests per Second

The requests per second never really leveled out. It isn’t clear what the underlying reason was for the uneven level at the top of the graph. Regardless, top-end performance was similar to the sustained traffic test.

The failure chart looks as we expected it to. After a certain point we start to see increased failure rates. They continue up and to the right in a mostly linear fashion.

CloudWays Traffic Spike Load Test – Failures

The response time distribution is really bad for this test.

CloudWays Traffic Spike Load Test – Response Time Distribution

As you can see, 80% of the requests finished in < 50s, which means that 20% of the requests took longer than that. The 99% mark was only reached after > 200s, at which point the user is likely long gone.

Conclusions

For $11 / month the CloudWays managed WordPress installation did a great job, but there are better performers out there in the same price range (GoDaddy, for instance). For the sake of this review, which only looks at raw performance, CloudWays probably isn’t the best choice. But if you’re looking for good-enough performance with extreme flexibility, you would be hard pressed to find a better provider.

Want to run load tests against your own WordPress sites? Sign up for Kernl now!