Back in June, Vultr announced the general availability of their new “High Frequency” servers. Reading through the announcement, I was intrigued by their claim of “3+GHz processors and blazing fast NVMe storage!” and immediately wondered how WordPress performance would compare to their regular Cloud Compute offering.

What was tested?
To get a better idea of the performance characteristics of the Vultr High Frequency Compute servers, I ran several types of load tests with all Vultr servers sitting in their Silicon Valley data center:
- 200 concurrent users from New York City (Digital Ocean NYC3)
- 2000 concurrent users from New York City (Digital Ocean NYC3) & London (Digital Ocean LON1)
And I tested the following scenarios:
- Vultr High Frequency – $6 32GB NVMe
- Vultr High Frequency – $24 128GB NVMe
- Vultr High Frequency – $96 384GB NVMe
- Vultr High Frequency – $256 768GB NVMe
- Vultr High Frequency – $24 128GB NVMe (Performance Tuned)
- Vultr Cloud Compute – $5 25GB SSD
- Vultr Cloud Compute – $20 80GB SSD
- Vultr Cloud Compute – $20 80GB SSD (Performance Tuned)
All servers, with the exception of the “performance tuned” ones, were built using the pre-built WordPress image that Vultr offers, with no caching or other tuning applied. The “performance tuned” servers were set up with Nginx, PHP-FPM, MariaDB, Memcached, and W3 Total Cache.
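If you want to put together a similar test yourself, here is a rough sketch of what a scripted load test of this shape can look like. I'm using Locust (a Python load testing tool) purely for illustration; the host URL, paths, and task weights below are placeholder assumptions, not the exact configuration behind these tests.

```python
# locustfile.py -- minimal sketch of a WordPress load test (illustrative only).
# Run with: locust -f locustfile.py --host https://your-wordpress-site.example
from locust import HttpUser, task, between

class WordPressVisitor(HttpUser):
    # Each simulated visitor pauses 1-3 seconds between page loads.
    wait_time = between(1, 3)

    @task(3)
    def homepage(self):
        # Most of the simulated traffic hits the front page.
        self.client.get("/")

    @task(1)
    def sample_post(self):
        # A smaller share of traffic reads an individual post (placeholder permalink).
        self.client.get("/hello-world/")
```

From the Locust web UI (or its command line flags) you would then dial in the number of concurrent users – 200 or 2000 in the tests above – and point the traffic at the server under test.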
Results
Starting with the Vultr High Frequency boxes, let’s break down performance.
$6 32GB NVMe
The $6 Vultr High Frequency (HF) machine performed well against the roughly equivalent $5 Vultr Cloud Compute (CC) instance. If we take a look at the requests/failures per second charts, you can see that the HF machine outperformed the CC instance by quite a lot.


As you can see, the HF server was able to handle roughly twice as many requests per second as the CC server without any errors. Let’s check out the average and median response times, as well as the response time distribution.


You can see that as the test progressed, response times got steadily worse before leveling out at around 2.5s. The response time distribution shows that 99% of requests finished in 3s or under, with the 100th percentile outlier coming in at just under 6s. That’s honestly pretty good performance for no tuning at all.


The response times for the $5 Cloud Compute instance were about 2 seconds worse than the High Frequency instance, with the average coming in at around 4.5s. The response time distribution was worse from a performance perspective, but better from a consistency perspective, with the spread between the 50th percentile and the 100th percentile only being ~1.3s.
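As an aside, the percentile figures in these distributions are simply derived from the raw response time samples. Here is a minimal sketch of that calculation using made-up sample data, not the actual measurements from this test:

```python
# Sketch: deriving a response time distribution from raw samples.
# The sample data below is illustrative only.
from statistics import quantiles

response_times_ms = [850, 1200, 1800, 2400, 2600, 2900, 3000, 5800]

# quantiles(..., n=100) returns the 1st through 99th percentile cut points.
pct = quantiles(response_times_ms, n=100)
p50, p99 = pct[49], pct[98]
p100 = max(response_times_ms)  # the single worst request

print(f"50th percentile: {p50:.0f}ms")
print(f"99th percentile: {p99:.0f}ms")
print(f"100th percentile: {p100}ms")
```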
At a glance, it looks like the Vultr High Frequency server outperforms the Cloud Compute instance in a significant way for only $1 extra. However, our next tests show that it might not be so simple.
$24 128GB NVMe (Performance Tuned)
The next set of tests ran 2000 concurrent users against the performance tuned setups. The traffic originated from a cluster of servers in both New York and London, with the host server sitting in Vultr’s Silicon Valley data center.
First, let’s take a look at the requests per second handled by the $24 High Frequency server and the $20 Cloud Compute server.

Now, 2000 concurrent users is a lot. I think the HF machine did quite well overall, but I confess that I did expect a bit more out of it. It topped out at around 1250 requests per second, with errors hovering in the ~175 requests per second range. If left to run for another 30 minutes, I think it would have reached closer to 1350 requests per second.

This is where things start to get interesting. The Vultr marketing team bills the High Frequency machines as head and shoulders above the regular Cloud Compute machines. Maybe that’s true for some workloads, but for this test it doesn’t seem much better at all. Looking at both graphs, you can see that the HF machine topped out a bit higher and had fewer errors, but not by enough to warrant the extra cash (in my opinion). Let’s see what sort of story the response times tell.


On the performance tuned box, you can see that the requests that did complete successfully were all quite fast. The average response time stayed between 100ms and 200ms throughout the entire test. The response time distribution was solid as well, with 99% of requests finishing in under 500ms. Not bad for 2000 concurrent users.
Now let’s take a look at how the comparable cloud compute server did.


For the Vultr Cloud Compute instance, response times also hovered between 100ms and 200ms, although just a hair higher than the High Frequency server’s. The response time distribution was excellent as well, with 99% of requests coming in under 500ms. The main difference is that the Cloud Compute instance had a 100th percentile outlier that the High Frequency server didn’t.
Thoughts & Conclusions
From what I can tell, the Vultr High Frequency servers are better than the Cloud Compute servers, but maybe only for certain types of workloads. You can see in the $5/$6 test that they easily outperformed the Cloud Compute instances, but in the more expensive $20/$24 test the results weren’t so cut and dried.
If we look at it from a requests per dollar standpoint, the cloud compute instance is actually a better value for this type of workload.
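To make that concrete, here is the back-of-the-envelope math. The High Frequency peak of ~1250 requests per second is the figure from the test above; the Cloud Compute peak is a rough placeholder read off the chart rather than an exact measurement.

```python
# Back-of-the-envelope "requests per second per dollar" comparison.
def requests_per_dollar(peak_rps: float, monthly_price_usd: float) -> float:
    return peak_rps / monthly_price_usd

hf_peak_rps = 1250  # observed peak on the $24 High Frequency box
cc_peak_rps = 1200  # rough placeholder for the $20 Cloud Compute box

print(f"High Frequency: {requests_per_dollar(hf_peak_rps, 24):.1f} req/s per dollar")
print(f"Cloud Compute:  {requests_per_dollar(cc_peak_rps, 20):.1f} req/s per dollar")
```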

I suggest running extensive load tests against the Vultr High Frequency machines before making any effort to switch over. They might be better for you. They might perform the same for more money.
In conclusion: ¯\_(ツ)_/¯
Run your own WordPress Load Tests with Kernl!