EasyWP Performance Review

Back in 2019 I became aware of a new player in the managed WordPress hosting space: EasyWP by Namecheap. As an existing Namecheap customer (for domain registration), I was curious about their entry into this market and gave them a try. When I initially ran performance tests, the platform was still very new and it showed. But now it's October 2020 and the platform has matured. Let's dig into the performance.

How did we test EasyWP Performance?

To performance test EasyWP we first created an export of this blog and imported it into the WordPress instance that EasyWP stood up for us. After that we ran a series of tests:

  • Authenticated Browsing
    • 50 users, 100 minutes
    • 500 users, 100 minutes
  • Unauthenticated Browsing
    • 200 users, 100 minutes
    • 2000 users, 100 minutes

For all tests, the load generators were distributed across DigitalOcean data centers in New York City, London, and Singapore.

What does each of these test types mean?

  • Authenticated Browsing – The load test user authenticates and then browses around the front end of the website. In most cases this means the page cache is bypassed, so every request hits PHP and the database.
  • Unauthenticated Browsing – The load test user browses around the front end of the website without logging in. In most cases this means we'll hit cached pages. Both scenarios are sketched below.
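
Kernl doesn't publish the exact scripts behind these scenarios, but conceptually they look something like the following sketch, which uses Locust as a stand-in for whatever tooling Kernl runs internally. The URLs, credentials, and task weights are placeholders, not Kernl's actual configuration.

```python
# Hypothetical sketch of the two scenarios using Locust (https://locust.io) as a
# stand-in for Kernl's internal tooling. URLs, credentials, and task weights
# are placeholders.
from locust import HttpUser, task, between


class UnauthenticatedBrowser(HttpUser):
    """Browses the front end anonymously; most requests should hit the page cache."""
    wait_time = between(1, 5)

    @task(3)
    def home(self):
        self.client.get("/")

    @task(1)
    def post(self):
        self.client.get("/?p=1")  # placeholder post URL


class AuthenticatedBrowser(HttpUser):
    """Logs in first; the auth cookies cause most hosts to bypass the page cache."""
    wait_time = between(1, 5)

    def on_start(self):
        # wp-login.php sets the wordpress_logged_in_* cookie on success.
        self.client.post(
            "/wp-login.php",
            data={"log": "testuser", "pwd": "testpassword"},  # placeholder credentials
        )

    @task
    def browse(self):
        self.client.get("/")
```

The important difference is the `on_start` login: the resulting `wordpress_logged_in_*` cookie is what causes most hosts to skip their page cache for the rest of the session.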

EasyWP Authenticated Browsing – 50 Users

The first test we ran was an authenticated browsing test with 50 concurrent users. We used this as our baseline, because we assumed that EasyWP would be able to handle it without any issues.

EasyWP Authenticated – 50 Users Requests per Second

As you can see, we settled in at around 40 requests per second with few errors. Not bad for uncached performance.

EasyWP Authenticated – 50 Users Response Time

Next we look at the average and median response times. With a median just under 200ms, this seemed like fairly good performance for authenticated traffic with load generators distributed around the globe.

EasyWP Authenticated – 50 Users Response Time Distribution

Finally, we take a look at the response time distribution. With the exception of the 100% outlier, 99% of all requests finished in under 660ms. This is great performance for an authenticated test.

EasyWP Authenticated Browsing – 500 Users

Our next test was the same as the previous test, except we increased the number of concurrent users by an order of magnitude. At this level, I expect things to fail when running the authenticated scenario. WordPress has awful performance without caching and you would need some seriously beefy infrastructure to absorb this number of authenticated requests.
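
To make the "no caching for authenticated users" point concrete: most WordPress page caches are configured to skip the cache whenever a `wordpress_logged_in_*` cookie is present, so every logged-in request falls all the way through to PHP and MySQL. A rough way to observe this on your own site is sketched below; the URL and cookie value are dummies, and the cache status header name varies by host.

```python
# Rough illustration (not EasyWP-specific): compare an anonymous request with
# one carrying a dummy wordpress_logged_in_* cookie. Many hosts expose a cache
# status header (e.g. X-Cache or X-Cache-Status); the exact header name varies,
# so treat this as a sketch rather than a definitive check.
import requests

URL = "https://example.com/"  # placeholder site

anon = requests.get(URL)
authed = requests.get(URL, cookies={"wordpress_logged_in_abc123": "dummy"})

for label, resp in (("anonymous", anon), ("logged-in cookie", authed)):
    cache_status = resp.headers.get("x-cache") or resp.headers.get("x-cache-status", "n/a")
    print(f"{label}: {resp.elapsed.total_seconds() * 1000:.0f} ms, cache: {cache_status}")
```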

EasyWP Authenticated – 500 Users Requests per Second

As you can see, things start off well before going right off the rails. I'm not sure what was happening behind the scenes at EasyWP, but I suspect the service was scaling up while this test was running; otherwise we would never have reached more than 250 successful requests per second.

EasyWP Authenticated – 500 Users Response Time

After the initial shock from the traffic volume, the median response time settled in at around 200ms. The average is much higher though, meaning we have some pretty serious outliers. These will likely show up in the response time distribution graph.

EasyWP Authenticated – 500 Users Response Time Distribution

Finally, we take a look at the response time distribution. It's not entirely terrible given the amount of traffic we were throwing at EasyWP: 99% of requests completed in ~6.4 seconds. The outliers here were big, though, which skewed the average response time upward.
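
That gap between average and median is exactly what a handful of very slow requests does to an otherwise fast distribution. A toy example with made-up numbers (not the actual test data):

```python
# Toy illustration of how a few huge outliers drag the mean up while barely
# moving the median. The numbers are invented, not taken from the test.
import statistics

response_times_ms = [200] * 95 + [30_000] * 5   # 95 fast requests, 5 very slow ones

print("median:", statistics.median(response_times_ms))  # 200
print("mean:  ", statistics.mean(response_times_ms))    # 1690
```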

EasyWP Unauthenticated Browsing – 200 Users

Next we move on to unauthenticated front end browsing. This is the use case that most managed WordPress hosting platforms optimize for (because 99% of traffic falls into this category). Let's see how EasyWP does.

EasyWP Unauthenticated – 200 Users Requests per Second

As expected for the unauthenticated baseline scenario, EasyWP did well. We settled in at around 160 requests per second with very few errors throughout the entire duration of the test.

EasyWP Unauthenticated – 200 Users Response Time

The response time graph looks solid. The median stayed right around 190ms, which is to be expected on a test with load generators spread across the globe. The average is a bit higher, so we probably had some outliers. Let's look at the distribution to see.

EasyWP Unauthenticated – 200 Users Response Time Distribution

99% of requests finished in 820ms, with the outlier at 100% being 12 seconds. I honestly expected the 99% number to be a bit lower, but it’s still in a fine range for a load test of this size.

EasyWP Unauthenticated Browsing – 2000 Users

Finally we take a look at what happens to performance on EasyWP when we increase the size of the load test by an order of magnitude. Some hosts handle this test fine, others struggle. So let’s get to it.

EasyWP Unauthenticated – 2000 Users Requests Per Second

EasyWP performed well in this test (ignore the spike; that was a reporting problem). We settled in at around 1,400 requests per second with very few errors. If you do the math, that's roughly 120 million requests per day.
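
The back-of-the-envelope math is simply sustained requests per second multiplied by the number of seconds in a day:

```python
# Back-of-the-envelope: sustained throughput -> requests per day.
requests_per_second = 1400
seconds_per_day = 60 * 60 * 24                        # 86,400
print(f"{requests_per_second * seconds_per_day:,}")   # 120,960,000 (~120M requests/day)
```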

EasyWP Unauthenticated – 2000 Users Response Time

The performance on the 2000 user test was very similar to the 200 user test, which is great for EasyWP. Normally an order of magnitude increase in users would see a decrease in performance, but EasyWP handled it well.

EasyWP Unauthenticated – 2000 Users Response Time Distribution

The response time distribution was also strikingly similar to the 200 user test. The only change is that the 100% outlier is a lot higher, which is to be expected when working at a higher scale.

EasyWP Performance Conclusions

In general, the performance of EasyWP was solid, especially considering how cost-effective it is. Is it the fastest WordPress host that we've tested? No. But it is in the top 2-3 managed hosts in the performance-per-dollar category. EasyWP is definitely worth a try.

Hummingbird Cache for WordPress Performance Review

The Hummingbird Cache plugin is one of many caching plugins available in the WordPress ecosystem. Enabling it should increase your site's performance significantly, but by how much? In this review we're going to use Kernl's WordPress Load Testing tool to push our Hummingbird Cache WordPress installation to its limits.


Test System Setup

As with most of our cache reviews, we used a pretty standard PHP-FPM + Nginx setup.

  • DigitalOcean $5/month 1vCPU 1GB RAM machine
  • Ubuntu 20.04
  • PHP (FPM) 7.4
  • Nginx 1.18
  • MariaDB 10.3
  • Content – For this test I imported the contents of my personal blog and used it for testing.

The test system was located in San Francisco, CA, USA. Load test virtual users were located in New York, NY, USA, with the higher volume tests also spreading virtual users across Europe.

How did we test Hummingbird Cache?

To test the Hummingbird WordPress caching plugin, we ran three different load tests with Kernl WordPress Load Testing (a quick sanity-check sketch follows the list).

  1. Baseline – This is a 200 concurrent user test for 60 minutes with no caching enabled.
  2. Cache Enabled – The same test as the baseline run, but with caching enabled. This is the “apples to apples” comparison.
  3. Cache++ – After the “apples to apples” comparison, we pumped up the concurrent users to 400 to see how well the plugin would respond.
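
If you want a rough before/after comparison of your own site without running a full load test, a simple timing loop is enough to see the cache kick in. This is only a sanity check, not a substitute for Kernl's load testing, and the URL below is a placeholder.

```python
# Quick before/after sanity check (not a substitute for a real load test):
# time a handful of sequential requests to the homepage, once with the cache
# plugin disabled and once with it enabled. The URL is a placeholder.
import statistics
import requests

URL = "https://example.com/"
N = 20

timings_ms = []
with requests.Session() as session:
    for _ in range(N):
        resp = session.get(URL)
        timings_ms.append(resp.elapsed.total_seconds() * 1000)

print(f"mean: {statistics.mean(timings_ms):.0f} ms, "
      f"median: {statistics.median(timings_ms):.0f} ms")
```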

Baseline Load Test

The baseline load test is just the bare WordPress setup with no plugins enabled and the base TwentyTwenty theme. As expected, performance isn't great, but it isn't terrible either.

Hummingbird Cache – Baseline Request Throughput

You can see from the throughput chart that the base WordPress installation with no caching enabled settled in at around 34 requests/s. Not too shabby, but what was the quality of those requests?

Hummingbird Cache – Baseline Response Times

The average and median response times tell a story of steady degradation of the user experience, finally settling at just shy of 5 seconds. If I were a reader of that blog, I would be extremely turned off by waiting 5 seconds just for the page to start loading.

Hummingbird Cache – Baseline Response Time Distribution

The response time distribution is pretty awful here. 50% of requests finished in under 5s, and 99% finished in under 5.5s. In most load tests we like to see the P50 number sit far below the P99 number; when the two are nearly equal at a high value, it means almost every request is slow and the server is saturated. In a perfect world they're both really low, but that doesn't happen in most cases.
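
To illustrate why P50 sitting close to P99 at a high value is a bad sign, here is a toy comparison of a healthy distribution versus a saturated one (invented numbers, not the test data):

```python
# Toy comparison of a "healthy" distribution (fast bulk, small slow tail) with a
# "saturated" one (everything slow). Values are invented, not the test data.
import statistics

healthy = [100] * 950 + [400] * 40 + [2000] * 10
saturated = [5000] * 990 + [5500] * 10

for name, samples in (("healthy", healthy), ("saturated", saturated)):
    cuts = statistics.quantiles(samples, n=100)   # cuts[49] ~ P50, cuts[98] ~ P99
    print(f"{name}: P50={cuts[49]:.0f} ms, P99={cuts[98]:.0f} ms")
```

In the healthy case the median sits far below the tail; in the saturated case the two converge at a high value, which is exactly what the baseline graph above shows.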

Cache Enabled Load Test

Our next test was the same as the baseline test, but with Hummingbird Cache enabled. We went with the default options, making no changes to the settings.

Hummingbird Cache – Cache Enabled Request Throughput

As expected of a caching plugin, throughput goes up a lot and settles in at around 175 requests/second with zero errors. That's more than a 5x improvement over the uncached baseline's ~34 requests/second. But what about the response times? How did this look to the end user?

Hummingbird Cache – Cache Enabled Response Times

The response time results are extremely promising. The average response time was around 95ms and the median was around 75ms. Most performance best-practices hope for your site to respond within 100ms, which this plugin easily accomplishes even under incredibly heavy load. Let’s break the response time numbers down further.

Hummingbird Cache – Cache Enabled Response Time Distribution

For 50% of our users, the response time was 75ms or less. For 99% of our users, response time was less than 160ms. These are great numbers and just what I would expect from a WordPress caching plugin.

Cache++ Load Test

Now that we've established that Hummingbird Cache does a great job under (somewhat) normal circumstances, let's see what happens if we double the traffic (400 concurrent users vs. 200 concurrent users).

Hummingbird Cache++ – Cache Enabled Request Throughput

Even at 2x the number of users, we don't see any errors, and throughput settles at about 325 requests per second. If you do the math (325 requests/second × 86,400 seconds), that's about 28 million requests a day. On a $5 box. Obviously this test is fairly naive, but it does show that the plugin can handle some serious traffic when needed.

Hummingbird Cache++ – Cache Enabled Response Times

The best part about this test is that even with incredible load the response time average and median are still below 180ms. Most users visiting a site would be extremely happy with response times in that range.

Hummingbird Cache++ – Cache Enabled Response Time Distribution

The response time distribution still tells a reasonable story. 50% of users see responses in 150ms or less and 99% see responses in 375ms or less. Solid performance from the Hummingbird Cache team.

Hummingbird Cache Conclusions

If you need a caching plugin for your site, Hummingbird Cache is a solid choice. It performs well, was easy to install, and was generally low friction. I found the user interface to be a little immature, but that doesn’t change the excellent performance we saw during our tests.

Want to run your own load tests? Sign up for Kernl!

Load Testing the CloudWays Managed WordPress Service

At the beginning of December, Kernl launched the closed beta of its WordPress load testing service. To shake out any bugs, we've decided to run a blog series load testing managed WordPress services. Today we're going to talk about the CloudWays managed WordPress service, in particular CloudWays deployed to Vultr.

How is the platform judged?

Cloudways will be tested using 3 different load tests (one of these ramp-up profiles is sketched after the list):

  • The Baseline – This is a 200 concurrent user, 10 minute, 2 users/s ramp-up test from San Francisco. This test is used to verify the test configuration and to make sure that Cloudways doesn’t go belly-up before we get started 🙂
  • The Sustained Traffic Test – This test is for 2000 concurrent users, ramps up at 2 users/s, from San Francisco, for 2 hours. The sustained traffic test represents a realistic load for a high traffic site.
  • The Traffic Spike Test – This test is intentionally brutal. It simulates 20000 concurrent users, ramps up at 10 users/s, from San Francisco, for 1 hour. It represents the sort of traffic pattern you might see if a Twitter celebrity shared a link to your blog.
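
For reference, a ramp-up profile like the sustained traffic test maps naturally onto a custom load shape. The sketch below uses Locust's `LoadTestShape` purely as a stand-in; Kernl's own tooling may implement ramp-up differently, and the numbers are simply taken from the list above.

```python
# Rough sketch of the sustained traffic profile (2,000 users, 2 users/s ramp,
# 2 hours) expressed as a Locust LoadTestShape. Illustrative only; Kernl's own
# tooling may work differently.
from locust import LoadTestShape


class SustainedTrafficShape(LoadTestShape):
    max_users = 2000
    spawn_rate = 2            # users added per second
    duration = 2 * 60 * 60    # 2 hours, in seconds

    def tick(self):
        run_time = self.get_run_time()
        if run_time > self.duration:
            return None  # returning None stops the test
        users = min(self.max_users, int(run_time * self.spawn_rate))
        return (users, self.spawn_rate)
```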

What CloudWays plan was used?

For this test we used the lowest tier plan available while hosting on Vultr. The cost of the plan is $11 / month and includes full SSH access to the box that CloudWays deploys your WordPress instance on.

Selected CloudWays Plan – $11 / month, hosted on Vultr

Where does the traffic originate?

The traffic for this load test originates in DigitalOcean‘s SFO2 (San Francisco) data center. The Vultr server lives in their Seattle data center.

The baseline load test

200 concurrent users, 2 users / s ramp up, 10 minutes, SFO

The baseline WordPress load test that we ran against CloudWays is used to verify the test configuration. CloudWays performed well on this test; you can see from the request graph that we settled in at around 25 requests/second.

CloudWays Baseline Test – Requests per Second

The failure graph for the baseline load test was empty, which is generally expected for the baseline test.

CloudWays Baseline Test – Failures

Finally, the response time distribution graph for the baseline test. You can see that 99% of the requests finished in ~200ms. There was at least one outlier at the ~5000ms mark, but this isn't uncommon for load tests.

CloudWays Baseline – Response Time Distribution

The sustained heavy traffic load test

2000 concurrent users, 2 users / s ramp up, 2 hours, SFO

The sustained traffic load test represents what a WordPress site with high readership might look like day over day.  The CloudWays setup responded quite well for the hardware that it was on.

CloudWays Sustained Load Test – Requests

You can see that performance was great for the first 10% of the test; the CloudWays setup had no trouble handling the load thrown at it. However, once we started getting to around 85 requests/second, the hardware had trouble keeping up with the request volume. You can see from the choppy behavior of the request graph that the Varnish server that sits in front of WordPress was starting to get overwhelmed. Considering that this particular CloudWays plan was deployed to a low-end Vultr VM, this performance isn't bad at all.

The failure graph was a little disappointing, but not unexpected given the hardware we tested on. It is very likely that a more robust underlying Vultr box would have produced much better results. You can see that failures increased at a fairly linear rate throughout the load test.

CloudWays Sustained Load Test – Failures

The final graph for this test is the response time distribution graph. It shows, for a given percentile of requests, how many milliseconds they took to complete. In this case CloudWays didn't perform great, but once again I'll point to the fact that the underlying Vultr hardware isn't that robust.

CloudWays Sustained Load Test – Response Time Distribution

From the graph you can see that 99% of requests completed in ~95 seconds. Yes, you read that correctly. You can interpret this graph as you like, but taking the other graphs into consideration it's clear that Varnish and the underlying Vultr hardware were completely overwhelmed. Knowing that makes this a little less terrible. We suspect that a smaller load test (maybe 750 concurrent users?) would yield a far better response time distribution; once a server becomes overwhelmed, the response time distribution tends to go in a bad direction.

The traffic spike load test

20000 concurrent users, 10 users / s ramp up, 1 hour, SFO

Given what we know about the sustained traffic load test, your expectations for how this test went are probably spot on. CloudWays did as well as can be expected given how the underlying hardware is allocated, but you would likely need to upgrade to a much larger plan to handle this level of traffic. We ended up stopping this load test after about 30 minutes due to the increasing failure rate.

CloudWays Traffic Spike Load Test – Requests per Second

The requests per second never really leveled out. It isn’t clear what the underlying reason was for the uneven level at the top of the graph. Regardless, top-end performance was similar to the sustained traffic test.

The failure chart looks as we expected it to. After a certain point we start to see increased failure rates. They continue up and to the right in a mostly linear fashion.

CloudWays Traffic Spike Load Test – Failures

The response time distribution is really bad for this test.

CloudWays Traffic Spike Load Test – Response Time Distribution

As you can see, 80% of the requests finished in under 50 seconds, which means that 20% of requests took longer than that. The 99th percentile was only reached after more than 200 seconds, at which point the user is likely long gone.

Conclusions

For $11 / month the CloudWays managed WordPress installation did a great job, but there are better performers out there in the same price range (GoDaddy, for instance). For the sake of this review, which only looks at raw performance, CloudWays probably isn't the best choice. But if you're looking for good-enough performance with extreme flexibility, you would be hard-pressed to find a better provider.

Want to run load tests against your own WordPress sites? Sign up for Kernl now!