Introducing Team Management and the Kernl Unlimited Plan

One of the most requested features for Kernl is Team Management. Team management can take on a lot of different forms, but for Kernl’s purposes it means the ability to share access to the Kernl interface with multiple people, each with their own set of credentials.

Team Management

Starting today you can sign up for the Kernl Unlimited plan (see below) and start managing your team. Team management allows you to share your Kernl account with authorized users. When a user is part of your team, they have a slightly restricted view of Kernl but otherwise see the same things you do. What’s restricted?

  • Continuous Deployment – Adding connected Git repositories is the responsibility of the admin. Non-admin users don’t need to see this information.
  • Billing – Once again, this is an admin responsibility. The admin is able to change plans, add credit cards, etc.
  • Team Management – Only the admin can add and remove users from their account.

Aside from that everything is the same for admin and non-admin users.

Kernl Team Management

The Unlimited Plan & Pricing Changes

In addition to team management we are also reducing the number of plans that Kernl has. Prior to this change we had 5 different plans (solo, agency, enterprise, huge, massive) which seemed like a bit much. To simplify Kernl’s pricing structure and give our customers better options we went down to 3 plans:

  • Solo – This $9 per month plan is our base plan. Most customers start here and then grow into larger plans as their needs change.
  • Agency – The $39 agency plan is essentially our former “Enterprise” plan but with increased limits for plugins, themes, licenses, load testing, and feature flags.
  • Unlimited – This new $79 unlimited plan allows you to have unlimited plugins, themes, licenses, and feature flags, plus very high limits on load testing (50K concurrent users). This plan also includes team management so you can more easily control access to your Kernl account.

As always, current Kernl customers are grandfathered into the plan that they are currently on.

Kernl Pricing

What’s Next?

Now that Kernl has a more robust upper-level offering we hope to start rolling more features into it. For example:

  • In the near future we’ll include Kernl Analytics with the Unlimited plan.
  • We’re also planning on rolling out limited team management into the Agency plan.

What’s New With Kernl – October 2019

October held a lot of great work for Kernl! Lots of refactoring, test coverage increases, and a new UI for managing Git deployments. Let’s dive in!

Features

  • Git Continuous Deployment UI Changes – Kernl now has a new and improved interface for adding continuous deployment to your projects.
  • Installed Plugin List Filter – You can now easily filter the installed plugin data in Kernl Analytics. Prior to this change the best you could do was sort by column.

Infrastructure, Bugs, & Other

  • Fetching BitBucket repos with more than 20 branches failed. This has been resolved.
  • The plugin/theme webhook build log didn’t handle the case where a user had just signed up and didn’t have any build requests yet.
  • Easy Digital Downloads license validation was failing on newer versions of EDD. Kernl has been updated to support the newer version of the API as well as the older one.
  • The Git integration has been refactored and had additional test coverage added.
  • Kernl was upgraded to Node 12.13.x. With this upgrade we get async stack traces and a non-trivial decrease in response time (~50ms).

Blog Posts

WordPress Performance on DigitalOcean Managed MySQL

Want to performance test your own WordPress installation? Try Kernl’s WordPress Load Testing service.

The database is the most important part of any WordPress site. If you aren’t adept at managing MySQL, it can be a serious risk for you and your clients. One way to de-risk this portion of your WordPress site is to go with a managed MySQL offering. There are a ton of different options out there, but for this post we’re going to look at DigitalOcean’s Managed MySQL.

DigitalOcean DB SaaS background.

Why Use DigitalOcean Managed MySQL?

There are a lot of reasons to use Digital Ocean’s Managed MySQL (or some other offering) including:

  • Simple Setup – With a few simple clicks you can have a high quality MySQL cluster set up.
  • Horizontal Scalability – Site growing fast? You can spin up read-only nodes to help scale out read operations.
  • Automated Daily Backups – No need to set up your own backups. DigitalOcean takes care of it for you.
  • Automated Failover – If for some reason your primary database node fails, you will automatically fail over to your warm spare.
  • Security – MySQL best practices for security are automatically followed by DigitalOcean. In addition to that your database is isolated to your private network so that outside requests can’t access it by default.

Baseline Performance Test

To get things started let’s do a baseline performance test where the MySQL database is on the same box as the Nginx server. The server configuration is as follows:

  • 1GB RAM
  • 3 vCPU
  • Nginx
  • PHP-FPM 7.2
  • MariaDB 10.1
  • No caching

A test of 400 concurrent users was run using Kernl’s WordPress Load Testing service. Without caching, the results are predictably bad.

Graph of requests per second for self-hosted MySQL with WordPress
Things fall apart at around ~70 req/sec

For completeness, let’s also take a look at the response time distribution.

Response time distribution for self hosted MySQL with WordPress
99% of requests finish in ~2 seconds

Given how many requests failed and the complete lack of caching, the requests that did get through didn’t perform too poorly: 99% finished in 2 seconds or less.

DigitalOcean Managed MySQL

When setting up a managed MySQL database on DigitalOcean you get the option to select the underlying hardware that powers it. For this blog post we went with the minimum possible configuration (1GB RAM, 1vCPU).
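If you want to point an existing WordPress install at the managed database instead of a local MariaDB instance, it’s just a wp-config.php change. Here’s a rough sketch; the host, port, and credentials below are placeholders for whatever DigitalOcean shows in your cluster’s connection details panel:

<?php
// wp-config.php excerpt (sketch only; every value here is a placeholder
// taken from DigitalOcean's "Connection Details" panel).
define('DB_NAME',     'wordpress');
define('DB_USER',     'doadmin');
define('DB_PASSWORD', 'your-database-password');
// Managed MySQL listens on a non-standard port; WordPress accepts "host:port".
define('DB_HOST',     'your-cluster-do-user-12345-0.db.ondigitalocean.com:25060');
// DigitalOcean requires SSL connections, so tell wpdb to connect with TLS.
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);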

Since fewer resources were allocated to it, we expected performance to be a bit worse on the dedicated MySQL instance. The load test confirmed those expectations.

Graph of requests per second for DigitalOcean Managed MySQL with WordPress
Things go sideways at ~50 req/s.

All things considered, 50 req/s against a WordPress site with no caching isn’t all that bad. Once we hit that level, the database charts showed 100% CPU saturation, and that’s when the error rate started to increase. Now let’s look at the response time distribution.

Response time distribution for managed MySQL with WordPress
99% of requests finish in less than 2.6 seconds

As expected, performance is slightly worse here due to fewer resources on the dedicated MySQL machine, but not in a huge way.

Cost & Why Managed MySQL is Important

For our test we used the most basic configuration that DigitalOcean offers:

  • 1 GB RAM
  • 1 vCPU
  • 10 GB Hard Disk
  • No failover
  • Automatic backups

All of this, plus not having to manage MySQL yourself, for $15/month. Not too bad, but to really get your money’s worth (automatic failover) you need to spend more.

  • 2 GB RAM
  • 1 vCPU
  • 25 GB Hard Disk
  • 1 Standby node
  • Automatic failover
  • Automatic backups

With automatic failover you significantly reduce the risk to yourself and your customers. However, this level of availability isn’t free 🙂 DigitalOcean doesn’t support failover on their cheapest node, so you have to upgrade to the next level ($30/month). After that, the failover node costs an additional $20/month. Now we’re looking at $50/month for a managed MySQL cluster with automated failover.

If you run a WordPress based business on DigitalOcean, $50/month buys you a lot of peace of mind. It’s also easy to scale up as traffic increases and the cost is competitive in the landscape of managed MySQL. If you happen to be an excellent DBA though, you can definitely manage your own cluster at a reduced monetary cost, but increased time cost.

Conclusions

Managing your own database cluster is probably fine if cost is a concern and you are good at it. In general though, if you can afford it we recommend using a managed MySQL host so you can push the complexity of operating a highly available MySQL cluster onto someone else.

Want to performance test your own WordPress installation? Try Kernl’s WordPress Load Testing service.

What’s New With Kernl – September 2019

I hope that everyone has had a great September! There have been some interesting changes at Kernl this month, so let’s dive in and discuss them.

Features

  • Credit Card Add Changes – For the past 3 years Kernl has had a simple credit card form (card number, expiration, etc.). In an effort to help reduce fraud we’ve migrated our card addition system to use Stripe’s Checkout.js. This gives us advanced fraud protection provided by Stripe, with the added benefit that your card details never hit Kernl’s servers.
  • Continuous Deployment Setup – Kernl is in the middle of a big UI change for continuous deployment setup. The first step was making the ‘connect your Gitlab/GitHub/BitBucket account’ piece a lot prettier. We’ve now made it a 3-panel selection, with nice big logos and fewer ways to get confused. Check it out by going to the “Continuous Deployment” section in Kernl.
  • Repository Pruning – Kernl will now prune repositories from your repository list that you no longer have access to or no longer have connected with Kernl. The exception here is that we’ll keep a reference to repositories that you have connected to plugins or themes.
  • Invoice Payment Failure Notifications – We recently had an issue where a customer wasn’t notified that their payment failed (their card had expired) 🙁 This should be resolved now: we’ve integrated with Stripe webhooks, so we immediately catch the event and send a message to the account owner (a rough sketch of the idea follows this list). We’ve also added a notification preference so that you can silence these emails.
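For anyone curious what “catching the event” looks like, here’s a rough sketch of webhook-driven payment failure notifications. Kernl’s backend isn’t PHP, so treat this purely as an illustration using Stripe’s official PHP SDK (stripe/stripe-php); the endpoint secret and the notify_account_owner() helper are placeholders.

<?php
// Illustrative sketch only. STRIPE_WEBHOOK_SECRET and notify_account_owner()
// are placeholders for whatever the real application uses.
require __DIR__ . '/vendor/autoload.php';

$payload   = file_get_contents('php://input');
$signature = $_SERVER['HTTP_STRIPE_SIGNATURE'] ?? '';

try {
    // Verify the payload was actually signed by Stripe.
    $event = \Stripe\Webhook::constructEvent($payload, $signature, getenv('STRIPE_WEBHOOK_SECRET'));
} catch (\Exception $e) {
    http_response_code(400);
    exit;
}

if ($event->type === 'invoice.payment_failed') {
    $invoice = $event->data->object;
    // Stand-in for the application's notification mechanism.
    notify_account_owner($invoice->customer_email, 'Your payment failed. Please update your card.');
}

http_response_code(200);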

Other

  • All packages have been upgraded on all Kernl servers.
  • Marketing pages are now cached in Redis.
  • Fixed a customer reported bug in plugin_update_check.php that threw a warning in newer versions of PHP. Upgrade to version 1.2.2 to get the fix.
  • Feature flag setup wizard had weird behavior if the customer didn’t have any plugins or themes. You can now manually name your feature flag product in the wizard if you so choose.
  • Fixed a few bugs in the feature flag UI where adding/removing individual users from a flag would occasionally fail.
  • The buttons to manually manage deploy keys have been moved to the bottom of the continuous deployment page. They also come with a disclaimer now.

Blog Posts

Beta testing WordPress plugin features with Kernl WordPress Feature Flags

Imagine that you are a WordPress plugin author. Maybe you work for an agency or maybe you work for yourself. The point is that you have clients or customers that have come to expect a high level of quality from your work. Great job!

Now imagine that you have been working on a complicated but much sought after feature for your plugin. Complicated means risk. Complicated means that your hard-earned reputation for high quality could be on the chopping block if you aren’t careful.

Don’t get stressed out like this guy.

So what do you do? You need to release the feature, but you also want to limit the risk of doing so. There are a few different things you can do, but we’re here to talk about only the last one:

  • “Dog-food” the new feature for as long as you can.
  • Write unit and integration tests to help validate your code.
  • Run a limited beta program with feature flags.

Using Kernl Feature Flags to Manage a Beta Program

First of all, what is a feature flag anyway? A feature flag is a way of toggling sections of code on or off without doing a code deployment. At an extremely high level, it looks like this:

<?php
  if ($flagActive) {
     // enable feature
  }
?>

That’s it. In practice, implementing feature flags is a bit more involved, but it doesn’t have to be much more complicated. For instance, using Kernl’s feature flag library:

<?php
  $kff = new kernl\WPFeatureFlags($kernlFeatureFlagProductKey, $userIdentifier);
  if ($kff->active('MY_FLAG')) {
    // enable feature
  }
?>

The beauty here is that you can toggle this block of code on or off for individual users, all users, or a percentage of users without ever needing to deploy any more code.

So. Easy.

And just like that ‘jack.slingerland@gmail.com’ gained access to the beta. No code deploys. No complicated anything. Just search, click, save. And what did this look like for the end user?

Before

No Feature Flag Footer Bar

After

Feature Flag Footer Bar Beta Program!

Now this is obviously a contrived example, but it has all the building blocks you need to do far more complicated integrations.

A Beta Program

With the building blocks above you can see it isn’t hard to manage a beta program. Simply ship a plugin update with the new feature wrapped in a feature flag. After that, add specific people to it as you see fit. As you get more feedback, continue to ship updates behind the flag. Once you are confident that the code looks good, you can remove the flag completely!

Now let’s dive in to the actual plugin code.

Tutorial

Adding feature flags to your plugin is a 3 step process.

  1. Create the product & flag in Kernl.
  2. Install the feature flag library via Composer
  3. Add the code to your plugin.

Step 1 is accomplished by signing up for Kernl and adding the product and flag. The product is just a container for all of your flags; that way your plugin can have a bunch of different flags that are still logically grouped together. For this case, let’s create a product called ‘Kernl Footer Flag Blog Post’ and a flag named ‘New Footer Beta’.

Step 2 is as simple as installing the Kernl WordPress feature flag library via Composer:

composer require kernl/wp-feature-flags

For step 3, you add some code to your plugin. Let’s take a look at it, followed by a discussion of what it’s all doing.

<?php

require __DIR__ . '/vendor/autoload.php';

// Register the footer hook on 'init' so the current user is available.
add_action('init', 'kernl_footer_flags_init');
function kernl_footer_flags_init() {
    add_action('wp_footer', 'beta_program_footer');
}

function beta_program_footer() {
    // Identify logged-in users by email; lump everyone else together.
    if (is_user_logged_in()) {
        $currentUser = wp_get_current_user();
        $userIdentifier = $currentUser->user_email;
    } else {
        $userIdentifier = 'Unauthenticated Users';
    }

    // Product key from Kernl, flag cache duration, and the default returned
    // when Kernl can't find the flag.
    $kernlFeatureFlagProductKey = '5d835a2830cbb568728b9bd4';
    $cacheTimeInMinutes = 1;
    $defaultToActive = false;
    $kff = new kernl\WPFeatureFlags(
        $kernlFeatureFlagProductKey,
        $userIdentifier,
        $defaultToActive,
        $cacheTimeInMinutes
    );

    // Only render the beta footer bar when the flag is active for this user.
    if ($kff->active('NEW_FOOTER_BETA')):
    ?>
        <style>
            .kernl-footer-flag-bar {
                background-color: #5d5d5d;
                font-size: 12px;
                text-align: right;
                color: white;
            }
        </style>
        <div class="kernl-footer-flag-bar">
            You are in the Kernl Footer Flags beta program (<?= esc_html($userIdentifier) ?>)
        </div>
    <?php endif;
}
?>

That’s the bulk of the plugin code (I excluded the Kernl updater for brevity). Let’s break it down.

Init Action and Composer Auto Load

require __DIR__ . '/vendor/autoload.php';
add_action('init', 'kernl_footer_flags_init');
function kernl_footer_flags_init() {
    add_action('wp_footer', 'beta_program_footer');
}

In this code we auto load our Composer dependency. After that we define an ‘init’ action to add our beta program footer. The reason we hook into ‘init’ is that if we do it too early we won’t be able to fetch the current user (which we use to create a unique identifier).

User Identifier Creation

function beta_program_footer() {
    if (is_user_logged_in()) {
        $currentUser = wp_get_current_user();
        $userIdentifier = $currentUser->user_email;
    } else {
        $userIdentifier = 'Unauthenticated Users';
    }

After we define our action, we create the function that it calls. The first thing we want to do is create a user identifier. The user identifier is used by Kernl to determine which feature flags the identified user should see. In our case, if the person isn’t logged in they get lumped into an ‘Unauthenticated Users’ bucket. If the person is logged in, we identify them by their email address. This is the identifier you will use to enable/disable features when using the ‘individual’ type feature flag. If we were using the ‘enable for a percentage of users’ type flag, simply assigning each user a unique identifier (maybe a UUID) would suffice, as sketched below.
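If you wanted to go the UUID route, a hypothetical helper might look like the sketch below. Nothing here is part of the plugin above; it just uses WordPress’s built-in wp_generate_uuid4() and a cookie to hand each anonymous visitor a stable identifier.

<?php
// Hypothetical helper, not part of the plugin shown above: give unauthenticated
// visitors a stable UUID so percentage-based flags behave consistently across
// page loads. The cookie has to be set before any output, so this runs on
// 'init' rather than 'wp_footer'.
add_action('init', 'kernl_footer_flags_ensure_identifier');
function kernl_footer_flags_ensure_identifier() {
    if (is_user_logged_in() || !empty($_COOKIE['kernl_ff_uuid'])) {
        return;
    }
    $uuid = wp_generate_uuid4();
    setcookie('kernl_ff_uuid', $uuid, time() + YEAR_IN_SECONDS, COOKIEPATH, COOKIE_DOMAIN);
    $_COOKIE['kernl_ff_uuid'] = $uuid; // Make it available on this request too.
}

function kernl_footer_flags_user_identifier() {
    if (is_user_logged_in()) {
        return wp_get_current_user()->user_email;
    }
    return !empty($_COOKIE['kernl_ff_uuid'])
        ? sanitize_text_field($_COOKIE['kernl_ff_uuid'])
        : 'Unauthenticated Users';
}
?>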

Instantiate the Kernl Feature Flag Library

$kernlFeatureFlagProductKey = '5d835a2830cbb568728b9bd4';
$cacheTimeInMinutes = 1;
$defaultToActive = false;
$kff = new kernl\WPFeatureFlags(
    $kernlFeatureFlagProductKey,
    $userIdentifier,
    $defaultToActive,
    $cacheTimeInMinutes
);

The $kernlFeatureFlagProductKey is generated by Kernl when you create your product. The $cacheTimeInMinutes variable lets you configure how long the flags are cached in WordPress. If it is set to 0, the library will call Kernl on every page load, which you generally don’t want. Last but not least, $defaultToActive is a boolean. If true, the `active()` function will return true if Kernl can’t find the flag.

Product Key

Active Check

if ($kff->active('NEW_FOOTER_BETA')):

The final piece is simply checking if this feature flag is active for this user.

Putting it all Together

If you want to see the source code for this plugin and/or install it yourself:

Conclusions

Kernl WordPress Feature Flags are an incredibly powerful tool for safely releasing your code into the wild. In a situation like WordPress plugins, where production is often not a machine that you are responsible for, being able to quickly toggle code on or off without a deploy is of paramount importance.

Load Testing Vultr’s New High Frequency Servers with WordPress

Run your own WordPress Load Tests with Kernl!

Back in June Vultr announced the general availability of their new “High Frequency” servers. Reading through the announcement I was intrigued by their claim of using “3+GHZ processors and blazing fast NVMe storage!” and immediately wondered what WordPress performance would look like versus their regular Cloud Compute offering.

What was tested?

To get a better idea of the performance characteristics of the Vultr High Frequency Compute servers, I ran several types of load tests with all Vultr servers sitting in their Silicon Valley data center:

  • 200 concurrent users from New York City (Digital Ocean NYC3)
  • 2000 concurrent users from New York City (Digital Ocean NYC3) & London (Digital Ocean LON1)

And I tested the following scenarios:

  • $6 High Frequency (32GB NVMe) -vs- $5 Cloud Compute (25GB SSD)
  • $24 High Frequency (128GB NVMe) -vs- $20 Cloud Compute (80GB SSD), both performance tuned

All servers with the exception of the “performance tuned” ones were built using the pre-built WordPress image that Vultr offers, with no caching or performance tuning done. The “performance tuned” servers were created using Nginx, PHP-FPM, MariaDB, Memcached, and W3 Total Cache.

Results

Starting with the Vultr High Frequency boxes, let’s break down performance.

$6 32GB NVMe

The $6 Vultr High Frequency (HF) machine performed well against the roughly equivalent $5 Vultr Cloud Compute (CC) instance. If we take a look at the requests/failures per second charts, you can see that the HF machine out-performed the CC instance by quite a lot.

$6 32GB NVMe Vultr High Frequency Requests/Failures
$5 25GB SSD Vultr Cloud Compute Requests/Failures

As you can see the HF server was able to handle roughly twice as many requests per second as the CC server without any errors. Let’s check out the average and median response times as well as the response time distribution.

$6 32GB NVMe Vultr High Frequency Response Time
$6 32GB NVMe Vultr High Frequency Response Time Distribution

You can see that as the test progressed response times got steadily worse until leveling out at around 2.5s. The response time distribution shows that 99% of requests finished in 3s or under, with the 100th percentile outlier coming in at just under 6s. This is honestly pretty good performance for no tuning at all.

$5 25GB SSD Vultr Cloud Compute Response Time
$5 25GB SSD Vultr Cloud Compute Response Time Distribution

The response times for the $5 Cloud Compute instance were about 2 seconds worse than the High Frequency instance, with the average coming in at around 4.5 seconds. The response time distribution was worse from a performance perspective, but better from a consistency perspective, with the spread between the 50th percentile and the 100th percentile being only ~1.3s.

At a glance, it looks like the Vultr High Frequency server out-performs the cloud compute in a significant way for only $1 extra. However, our next tests show that it might not be so simple.

$24 128GB NVMe (Performance Tuned)

The next set of tests that were performed were 2000 concurrent users on a performance tuned setup. The traffic originated from a cluster of servers in both New York and London, with the host server being in Vultr’s Silicon Valley data center.

First, let’s take a look at the requests per second handled by the $24 High Frequency server and the $20 Cloud Compute server.

$24 128GB NVMe Vultr High Frequency Requests

Now, 2000 concurrent users is a lot. I think that the HF machine did quite well overall, but I confess that I did expect a bit more out of it. It topped out at around 1250 requests per second, with errors hovering in the ~175 requests per second range. If left to run for another 30 minutes I think it would have reached closer to 1350 requests per second.

$20 80GB SSD Vultr Cloud Compute Requests

This is where things start to get interesting. The Vultr marketing team bills the High Frequency machines as head and shoulders above the regular Cloud Compute machines. Maybe they are for some workloads, but for this test the HF machine doesn’t seem much better at all. Looking at both graphs you can see that it topped out a bit higher and had fewer errors, but not enough to warrant the extra cash (in my opinion). Let’s see what sort of story the response times tell.

$24 128GB NVMe Vultr High Frequency Response Time
$24 128GB NVMe Vultr High Frequency Response Time Distribution

On the performance tuned box you can see that the requests that did complete successfully were all quite fast. The average response time was between 100ms and 200ms throughout the entire test. The response time distribution was solid as well, with 99% of requests finishing in under 500ms. Not bad for 2000 concurrent users.

Now let’s take a look at how the comparable cloud compute server did.

$20 80GB SSD Vultr Cloud Compute Response Time
$20 80GB SSD Vultr Cloud Compute Response Time Distribution

For the Vultr Cloud Compute instance, response times also hovered between 100ms and 200ms, although just a hair higher than the High Frequency server’s. The response time distribution was excellent as well, with 99% of requests coming in under 500ms. The main difference is a 100th percentile outlier here that the High Frequency server didn’t have.

Thoughts & Conclusions

From what I can tell, the Vultr High Frequency servers are better than the Cloud Compute servers, but maybe only for certain types of workloads. You can see in the $5/$6 test that they easily out-performed the Cloud Compute instances, but in the more expensive $20/$24 test the results weren’t so cut and dried.

If we look at it from a requests per dollar standpoint, the cloud compute instance is actually a better value for this type of workload.

Requests per Dollar

I suggest running extensive load tests against the Vultr High Frequency machines before making any effort to switch over. They might be better for you. They might perform the same for more money.

In conclusion: ¯\_(ツ)_/¯

Run your own WordPress Load Tests with Kernl!

What’s New With Kernl – June 2019

Hello everyone! Kernl got some pretty interesting updates this month, so let’s dive in!

Features

Average & Median Response Times for Load Tests – Kernl’s WordPress load testing service has always had this data available, we just never surfaced it in a way that was easy to consume. You’ll now see a new tab when you run a load test with information about average and median response times during the course of your test.

Progress Bar for Load Test Initialization – When you create a load test you will now see a progress bar that indicates how far along we are in spinning up your load testing infrastructure. Prior to this update it was easy to think that the process had stalled or broken.

Bug Fixes & Other

  • Load test site ownership verification would fail for certain types of HTML minification. This has been resolved.
  • Our internal analytics service has been refactored into a singleton. This reduced each app server’s memory footprint by 5MB.
  • The public license validation endpoint now has a 10 second cache on it. This helps us deal with burst traffic from license validations.
  • Home page JavaScript has been minified and concatenated into a single script to decrease load time.
  • All packages have been updated on our servers.

That’s it for June! If you have any questions reach out to jack@kernl.us.

What’s New With Kernl – May 2019

There was a lot of work on Kernl’s WordPress Load Testing this month, so let’s dive in and learn about it!

Features & Bug Fixes

  • Reliability Load Testing – Have you ever wondered how a WordPress host performs over an extended period of time? Kernl’s new reliability testing allows you to answer that question. You can now run low volume (25 user) load tests for up to 30 hours.
  • Load Test Sharing Page Improvements – You can now see a graph of the total number of failures when you share your load test results. See an example at https://kernl.us/wordpress-load-testing/results/553f746e7a1fa8663be442a9/333.
  • Download Full Load Test Data – You are now able to download all the data from your load test. This is the full, non-sampled, data set that Kernl produces during a load testing run.
  • Sampled Load Test Data – Kernl’s WordPress load testing service can create a lot of data, so for data sets over a certain size we now sample the data. This helps all of the charts load faster and improves your experience.

That’s it for this month!

Digital Ocean 30 Hour WordPress Load Test for Reliability and Consistency

Perform your own 30 hour load tests with Kernl!

Over the past 5 months I’ve been writing a lot of articles testing WordPress performance under heavy load. One of the comments I often receive is “Yes, but how reliable is the host over time?”. To answer that question I made some changes to Kernl that allow customers to run long duration tests against their providers with a steady load. Given my affinity for Digital Ocean, I figured it would be a great first host to test.

What Was Tested & How I Tested It

Digital Ocean has several data centers across the globe and I figured that I should test each of them to see how reliable they were. For this test I ran a single load test against each of the following data centers for 30 hours with 25 concurrent users:

  • New York City (NYC3)
  • Toronto (TOR1)
  • Bangalore (BLR1)
  • Frankfurt (FRA1)
  • London (LON1)
  • Singapore (SGP1)
  • Amsterdam (AMS3)

All requests were made from the Digital Ocean data center in San Francisco (SFO2). The target of each load test was a simple $5 / month droplet with the WordPress image from the Digital Ocean marketplace installed on it.

Results

The table below summarizes the results for all of the long duration load tests. Click the region to see more details of the load test.

Example Load Test Results Page
Region | Requests | Failures | Failure % | Req/s Avg
NYC3   | 2.49M    | 139      | 0.005%    | 23
TOR1   | 2.46M    | 126      | 0.005%    | 22/23
BLR1   | 2.17M    | 115      | 0.005%    | 20
FRA1   | 2.28M    | 1405     | 0.06%     | 21
LON1   | 2.33M    | 1335     | 0.05%     | 21
SGP1   | 2.29M    | 98       | 0.004%    | 21
AMS3   | 2.30M    | 228      | 0.009%    | 21

As you can see from the results above, Digital Ocean’s reliability is excellent across the entire testing period. Even the data centers with the most failures (Frankfurt, London) had an incredibly small error rate. I’m not going to add the response time distribution results here because they were uniformly excellent.

Anomalies

  • The Bangalore (BLR1) test averaged only 20 req/s. Even though the geographic distance is large, I expected the response times to go up but the throughput to stay similar.
  • The Toronto (TOR1) load test averaged 22 req/s for 28 hours, then jumped up to 23 req/s for the last two hours of the test. Maybe a noisy neighbor went quiet?
  • Digital Ocean’s Frankfurt (FRA1) and London (LON1) data centers had an order of magnitude more errors than the other data centers I tested.

Conclusions

As a whole, Digital Ocean performed very well in all of their data centers with a moderate amount of sustained traffic to a WordPress instance. In the future I would like to try running all of these tests twice with a different origin for each test run. It’s also worth noting that I haven’t done this type of test on any other platform yet. I hope that as I test more providers I’ll find out whether or not Digital Ocean performed as well as I think it did.

Perform your own 30 hour load tests with Kernl!

Vultr Cloud Compute -vs- Dedicated -vs- Bare Metal WordPress Performance

Load test your own WordPress site with Kernl! Getting started is free!

In the world of cloud computing there are a lot of different options to choose from. Normally you only need to choose how big your instance will be (2 vCPUs or 4, 2GB of RAM or 6GB), but some cloud compute providers are upping their game and providing an even wider array of options and instance types to choose from.

Vultr has 3 different types of compute instances:

  • Cloud Compute – You get your own virtual server, but it is sharing hardware resources with lots of friends. Noisy neighbors can definitely be a problem.
  • Dedicated – Dedicated servers, but virtualized. I think it is still possible to run into noisy neighbor problems in this situation.
  • Bare Metal – Dedicated servers and hardware. No hypervisor and no noisy neighbors taking up your resources.

In this article we’re going to see how a very basic WordPress install performs on the different types of Vultr compute instances. We’ll do so using Kernl’s WordPress Load Testing service.

The Test

As per usual with Kernl load tests, I imported this blog’s content into each load testing environment. The load test skews extremely read heavy; if you have a site that is write heavy or a mix of the two, you may see different results.

Each test was performed for 1 hour with 2000 concurrent users generating load from London and New York to Vultr’s data center in New Jersey.

Configuration

For this test I used Vultr’s pre-built WordPress image with no caching. A lot of readers might say “But you can get much better performance using X or Y!”, and they would be right! But I’m not testing Apache vs Nginx performance, or W3 Total Cache vs WP Rocket, I’m testing Vultr hardware under load in a real world scenario. I simply want to know at the end of this article if Vultr Cloud Compute, Dedicated, or Bare Metal is better for WordPress hosting.

Test 1: Vultr Cloud Compute $10 / Month

The first test I performed was against the $10 per month Vultr Cloud Compute offering. As expected of a $10/month VPS, performance wasn’t awesome, but it also wasn’t terrible.

All the red of red land

As you can see, there were lots of failed requests and the server only maintained a throughput of 16 req/s. Not unexpected with a single core and 1 GB of RAM. After all, I was throwing 2000 concurrent users at the server. The response time distribution was similarly bad.

Bad, but could be a lot worse.

Overall, the results for the $10 VPS were as expected. This isn’t really an apples to apples comparison (we’ll get to that later), but I wanted to give you an idea of what basic VPS instance performance looks like.

Test 2: Vultr Cloud Compute $80 / Month

With this test we’re starting to get closer to the cost of bare metal and dedicated instances. This server had 6 CPUs and 16GB of RAM. Considerably more robust than the $10 server.

Lots of red, but also blue!!!

This graph tells a much different story than the previous test. Performance peaked at 169 req/s and then leveled off at 100 req/s. We still saw a lot of errors, but once again this isn’t unexpected. Honestly if you started to get this much traffic you would likely start breaking up WordPress into its components (file system, PHP + Nginx, MySQL) and start scaling horizontally.

Much Lower Response Time Distribution

The response time distribution was much better for this server as well. The upper end was just as bad as the cheaper box, but the 90% and below ranges were pretty solid for the amount of traffic that was being received.

Test 3: Vultr Bare Metal $120 / Month

The Vultr Bare Metal server was the instance I was most excited about testing. I’ve always had a soft spot for hardware and getting access to a bare metal server is pretty cool. For $120 per month (on sale, price will rise to $300/month eventually) you get 8 CPUs and 32GB of RAM. This is a pretty serious server.

Oooh, 200 req/s.

Lots of blue on this graph, but also the expected amount of red. You can see that throwing 2 more non-virtual CPUs and 2X the RAM at the problem made a pretty big difference. We peaked at 200 req/s and then leveled out at 125 req/s. For reference, that 200 req/s peak works out to roughly 17.2 million requests per day.

🙁

The lower end of the response time distribution was solid, but the upper end wasn’t great at all. With all of those errors it isn’t surprising that this is the case.

Test 4: Vultr Dedicated $120 / Month

I honestly had a tough time figuring out why Vultr priced the Bare Metal and Dedicated instances so close to each other. Dedicated is clearly inferior (far fewer CPUs and far less RAM), so why would anyone choose it? Anyway, let’s take a look at the graph.

💩💩💩💩💩

This test peaked at 100 req/s and then leveled off at around 70 req/s. I really would expect a lot better performance for this sort of money.

Also 💩, but not as much 💩.

Response time distribution was similar to the other boxes. With all the failures it tends to skew pretty hard in the wrong direction. I’m sure that there is a use case for these dedicated Vultr instances, but it definitely isn’t hosting a WordPress site.

Conclusions

With all of this data it was pretty easy to graph which of these is the best value.

Value was calculated by taking the cost per month and dividing it by the peak requests per second each instance handled. Based on the performance we saw above, the Vultr Cloud Compute instances seem like your best value for WordPress hosting, while the Bare Metal and Dedicated instances aren’t a great choice. As mentioned above, there are likely use cases where they are a good fit though (maybe workloads that require very consistent performance).
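As a rough illustration (this is a sketch of the idea, not the exact method behind the chart), here is that calculation using the throughput peaks reported above:

<?php
// Rough illustration of the value calculation: monthly cost divided by the
// peak req/s each instance reached in the tests above (the $10 instance only
// reported a sustained 16 req/s).
$instances = [
    'Cloud Compute $10/mo' => ['cost' => 10,  'peak_rps' => 16],
    'Cloud Compute $80/mo' => ['cost' => 80,  'peak_rps' => 169],
    'Bare Metal $120/mo'   => ['cost' => 120, 'peak_rps' => 200],
    'Dedicated $120/mo'    => ['cost' => 120, 'peak_rps' => 100],
];
foreach ($instances as $name => $specs) {
    // Lower is better: dollars per unit of peak throughput.
    printf("%s: $%.2f per req/s\n", $name, $specs['cost'] / $specs['peak_rps']);
}
?>

By this measure the $80 Cloud Compute instance comes out cheapest per unit of throughput, and the Dedicated instance is by far the most expensive.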

As with all of these tests, your mileage may vary! I highly recommend that you run load tests on any new host that you use to get an idea of what sort of performance you can expect.

Load test your own WordPress site with Kernl! Getting started is free!