WordPress Plugin Performance Implications: Wordfence

In the world of WordPress there are a lot of different plugins you can install to extend its functionality. One of the most popular plugins is Wordfence, a security plugin developed by the fine folks over at Defiant.

Test the performance implications of your own WordPress plugins with Kernl WordPress Load Testing.

Most WordPress developers understand that the more plugins you add to your site the slower it goes, but exactly how much slower isn’t something that is often measured. More importantly, nobody has bothered to figure out the performance implications of installing many of the most popular plugins available to WordPress users today.

This all changes now. In this series of blog posts we’ll explore the performance implications of different WordPress plugins, testing each plugin in isolation and then with caching enabled, using Kernl’s WordPress load testing service. First up is Wordfence.

Machine Setup

The test machine for these tests was a $5 Digital Ocean droplet in their SFO2 (San Francisco) data center. The machine has 1GB of RAM, 1 vCPU, and a 25GB hard disk.

The software installed on the machine is as follows:

  • Ubuntu 19.04 with all updates installed.
  • Nginx 1.16.1
  • PHP-FPM 7.3
  • MariaDB 10.3

For tests that required caching we used our favorite caching plugin, W3 Total Cache, with memcached as the data store.

The theme used was Twenty Twenty with no modifications, and the content was an export of this blog.

All traffic was generated out of Digital Ocean’s NYC3 (New York City) data center.

Test Methodology

A series of 4 tests was run to measure the performance implications of installing Wordfence:

  1. Baseline – no plugins enabled, no caching.
  2. Wordfence enabled, no caching.
  3. W3 Total Cache enabled, no Wordfence.
  4. W3 Total Cache and Wordfence both enabled.

Each test was run with 200 concurrent users for 1 hour.

Maximum Requests per Second

The first metric that we looked at was the maximum requests per second that the site was able to handle under each situation outlined above.

Wordfence – Maximum Requests per Second

As you can see the difference between having Wordfence enabled and having Wordfence disabled is huge. The non-cached site with no plugins enabled handled a maximum of 26 requests per second. With Wordfence enabled it could only handle 12.

More interesting though was that with caching enabled (W3 Total Cache) the site could handle 165 requests per second, but only 67 requests per second with Wordfence enabled.

These results were so surprising that we ran the tests twice. The results were the same (within 1%-2%) each time.

First Error Occurrence

The next metric we looked at was when the first error occurred during our load tests.

Wordfence – First Error req/s

Once again we see that adding Wordfence took a pretty serious toll on our performance. For the baseline test (no plugins, no cache) we see our first error at around 26 requests / second. In the Wordfence test with no cache we saw it at 12 requests / second.

With our caching enabled test, we never saw any errors when Wordfence wasn’t enabled. However with Wordfence enabled we saw our first error at 56 requests / second.

Average Response Time

Now that we’ve looked at server capacity metrics (max requests, first error), let’s take a look at how your end user experience changes with Wordfence enabled.

Wordfence – Avg. Response Time (milliseconds)

These results were fairly interesting and not at all what was expected. The average response time for the baseline test was around 4.6 seconds, while the average response time with Wordfence enabled was about 2.5 seconds. So why might that be? Looking through the data, it appears that the baseline test had far more successful requests, while with Wordfence enabled requests seemed to fail faster. In short, the baseline test had higher throughput but slower response times, and the Wordfence test had lower throughput but faster response times.

Turning our attention to the caching scenarios, we can see that enabling Wordfence is extremely problematic at scale. With caching enabled, our baseline test had an average response time of 146ms. With caching + Wordfence that time ballooned over 10x to 1874ms.

99th Percentile Response Time

Our final metric was looking at the 99th percentile response times for our different scenarios. What does that mean? It answers the question “How long does it take for 99% of requests to finish?”. This intentionally leaves out the last 1% because those are often outliers.
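To make that concrete, here is a minimal sketch (not Kernl’s actual implementation) of the nearest-rank method for computing a percentile in PHP:

<?php
// Nearest-rank percentile: sort the samples, then take the value at
// position ceil((p / 100) * N). For p = 99, 99% of samples are at or
// below the returned value.
function percentile(array $values, float $pct): float
{
    sort($values);
    $index = (int) ceil(($pct / 100) * count($values)) - 1;
    return (float) $values[max(0, $index)];
}

// Illustrative response times in milliseconds.
$responseTimes = [120, 135, 150, 180, 210, 240, 300, 370, 410, 2400];
echo percentile($responseTimes, 99); // 2400 (with only 10 samples, p99 is the slowest one)
?>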

Wordfence – 99th Percentile Response Time

As you can see above, the un-cached 99th percentile hovers right around 5s whether Wordfence is enabled or disabled. Since this is the 99th percentile, such a high response time isn’t too surprising in either scenario.

The more interesting scenario is when caching is enabled. For WordPress with only a caching plugin running, the 99th percentile is 370ms. That means 99% of all requests finished in 370ms. With Wordfence also enabled (Wordfence + caching), that number jumped to 2400ms. That’s a ~7X increase.

Conclusions

From these tests we came to a few conclusions:

  1. Caching does not solve all of your performance problems. If your cache strategy relies on requests making it all the way through to WordPress, then you are very likely to still take a performance hit from other plugins. Full-page caches that sit in front of WordPress, such as Cloudflare, Varnish, LiteSpeed, or Nginx, can help alleviate this problem (see the sketch after this list).
  2. Running Wordfence is expensive from a performance standpoint. The data suggests that by simply enabling Wordfence you lose about 50% of your maximum capacity, and your response times can increase by 2x-7x.
  3. Wordfence is still worth it for a lot of people. If you’ve operated a WordPress site for any length of time you know how often they get attacked. Wordfence does a great job of reducing attack surface area and making it hard for people to attack you.
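As referenced in the first conclusion above, a full-page cache at the web server layer means cached hits never reach PHP at all, so a slow plugin can’t touch them. Here is a minimal sketch using Nginx’s fastcgi_cache; the cache zone, paths, and timings are illustrative, not the configuration we tested:

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
        # Serve repeat requests straight from the Nginx cache, bypassing PHP.
        fastcgi_cache WORDPRESS;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}

The trade-off is invalidation: you need a purge rule or plugin to clear cached pages when content changes.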

Test the performance implications of your own WordPress plugins with Kernl WordPress Load Testing.

Introducing Team Management and the Kernl Unlimited Plan

One of the most requested features that Kernl gets is Team Management. Team management can take on a lot of different forms, but for Kernl’s purposes it is the ability to share access to the Kernl interface among multiple people, each with their own credentials.

Team Management

Starting today you can sign up for the Kernl Unlimited plan (see below) and start managing your team. Team management allows you to share your Kernl account with authorized users. When a user is part of your team, they have a slightly restricted view of Kernl but otherwise see the same things you do. What’s restricted?

  • Continuous Deployment – Adding connected Git repositories is the responsibility of the admin. Non-admin users don’t need to see this information.
  • Billing – Once again, this is an admin responsibility. The admin is able to change plans, add credit cards, etc.
  • Team Management – Only the admin can add and remove users from their account.

Aside from that everything is the same for admin and non-admin users.

Kernl Team Management

The Unlimited Plan & Pricing Changes

In addition to team management we are also reducing the number of plans that Kernl has. Prior to this change we had 5 different plans (solo, agency, enterprise, huge, massive) which seemed like a bit much. To simplify Kernl’s pricing structure and give our customers better options we went down to 3 plans:

  • Solo – This $9 per month plan is our base plan. Most customers start here and then grow into larger plans as their needs change.
  • Agency – The $39 agency plan is essentially our former “Enterprise” plan but with increased limits for plugins, themes, licenses, load testing, and feature flags.
  • Unlimited – This new $79 unlimited plan allows you to have unlimited plugins, themes, licenses, and feature flags, plus very high limits on load testing (50K concurrent users). This plan also includes team management so you can more easily control access to your Kernl account.

As always, current Kernl customers are grandfathered into the plan that they are currently on.

Kernl Pricing

What’s Next?

Now that Kernl has a more robust upper-level offering we hope to start rolling more features into it. For example:

  • In the near future we’ll include Kernl Analytics with the Unlimited plan.
  • We’re also planning on rolling out limited team management into the Agency plan.

What’s New With Kernl – October 2019

October held a lot of great work for Kernl! Lots of refactoring, test coverage increases, and a new UI for managing Git deployments. Let’s dive in!

Features

  • Git Continuous Deployment UI Changes – Kernl now has a new and improved interface for adding continuous deployment to your projects.
  • Installed Plugin List Filter – You can now easily filter the installed plugin data in Kernl Analytics. Prior to this change the best you could do was sort columns.

Infrastructure, Bugs, & Other

  • Fetching BitBucket repos with more than 20 branches failed. This has been resolved.
  • The plugin/theme webhook build log didn’t handle the case where a user had just signed up and didn’t have any build requests yet.
  • Easy Digital Downloads license validation was failing on newer versions of EDD. Kernl has been updated to support the newer version API as well as the older API.
  • The Git integration has been refactored and had additional test coverage added.
  • Kernl was upgraded to Node 12.13.x. With this upgrade we get async stack traces and a non-trivial decrease in response time (~50ms).

Blog Posts

WordPress Database Performance Showdown: MySQL vs MariaDB vs Percona

WordPress requires that you use a MySQL-compatible database for its backend. It used to be that you could confidently choose MySQL and go on with life, but in 2019 the choice isn’t quite that simple. With MySQL, MariaDB, and Percona all as attractive options, how do you know which to choose?

Choosing a database isn’t always about performance, but for the sake of this article it will be 🙂

For this series of tests we tested database performance out-of-the-box with no special tweaks. It is quite possible that a professional database administrator could tune each database to perform better, but most people hosting WordPress aren’t DBAs. With regards to caching, none was enabled. We wanted to test database performance, not cache performance.

What was tested?

The WordPress host machine was a DigitalOcean CPU Optimized droplet with 16 vCPUs (dedicated hyper-threads) and 32 GB of RAM. This monster of a machine was chosen so that we could be certain that Nginx + PHP-FPM weren’t the cause of any bottlenecks. A minor tweak to the PHP-FPM config allowed for full use of all 16 vCPUs.
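For reference, the tweak amounts to raising PHP-FPM’s worker limits so there are enough workers to keep all 16 vCPUs busy. A sketch of the pool config, with illustrative values rather than the exact ones used:

; /etc/php/7.3/fpm/pool.d/www.conf (excerpt, values illustrative)
pm = dynamic
pm.max_children = 32      ; enough workers to saturate 16 vCPUs
pm.start_servers = 16
pm.min_spare_servers = 8
pm.max_spare_servers = 16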

The database host machines were DigitalOcean CPU Optimized droplets with 4 vCPUs (dedicated hyper-threads) and 8GB of RAM. CPU Optimized droplets were chosen because we didn’t want our tests to be at the mercy of shared CPU resources.

Each deployment was in DigitalOcean’s SFO2 region with the WordPress server communicating with the database over the internal private network. The traffic producing nodes were deployed in DigitalOcean’s NYC3 data center and communicated via the public internet.

The software versions are as follows:

  • Percona Server for MySQL 8.0.16-7
  • MariaDB 10.4.8-GA
  • MySQL 8.0.18

What tests were run?

Want to test your site’s performance? Sign up for Kernl WordPress Load Testing.

For each database that was tested, we ran a load test with the following parameters:

  • 500 concurrent users
  • 2 req/s ramp up
  • 30 minute duration

The goal here wasn’t to bring the database to its knees, but to see how it performed under sustained heavy load that stopped just short of knocking it over.

Left: WordPress (Nginx + PHP-FPM), Right: MySQL

MariaDB WordPress Performance

When Oracle acquired Sun Microsystems (and with it MySQL) in 2009, there was a lot of concern amongst the core developers that Oracle would eventually close off MySQL to the world, in keeping with Oracle’s business model. Before that could take place, a GPL fork of MySQL was created named MariaDB.

MariaDB is open source and in active development. But how does it stand up to our WordPress load tests? Let’s find out.

First we’ll take a look at the requests and failures per second.

~379 req/s with periodic errors

As you can see from the results above, performance scales up very well over time, eventually peaking at ~379 req/s. We see periodic database-related errors (the vCPUs on the database were almost completely saturated) but nothing too crazy. Next, let’s see how the response times look.

Median response time of ~190ms.

The median response time of WordPress when backed by MariaDB under heavy load stays remarkably consistent. You can see that the average is slowly creeping up by the end of the test, but nothing that would be noticeable by customers yet.

99% of requests finish in under 500ms

The response time distribution is a little more interesting than the median response time. 90% of all requests finish in under 300ms and 99% of all requests finish in less than 500ms. Overall, the performance of MariaDB out of the box with no configuration is quite good.

Percona WordPress Performance

Another open-source fork of MySQL, Percona was started in 2006 and has been steadily delivering value-added features and enterprise support on top of MySQL for more than 12 years. With all that accumulated experience, how does Percona measure up?

~320 requests/s, ~3 errors/s

Overall the Percona WordPress performance was pretty good. Not quite as good as MariaDB, but nothing to scoff at. The one interesting thing was that the error rate was consistent across most of the test once the request volume passed ~200 requests/s. Let’s see if the response time graph adds to the story.

~450ms Median Response Time

The data here is actually pretty interesting. The response time for Percona was comparable to MariaDB right up until the time it started getting consistent errors. After that, it more than doubled and stayed that way for the rest of the test.

99% of requests finished in ~550ms

The response time distribution tells a similar story to the response time chart. The small difference between the 50th percentile and the 99th percentile shows that performance was very consistent across the entire test; it just wasn’t as good as MariaDB.

MySQL WordPress Performance

Last but not least (well….) in our test is MySQL. It’s been around since 1994 and is now owned by Oracle. Its latest releases have added a lot of great features, such as window functions and more JSON functionality to compete with PostgreSQL. So let’s see how it did.

295 requests/s, 3-4 errors/s

MySQL had a similar trajectory to Percona: it did pretty well across the board, with a small but consistent stream of errors after the request volume passed ~200 requests/s. Also note that it never achieved higher than 300 req/s, which both MariaDB and Percona did.

500ms Median Response Time

The shape of the MySQL response time graph looks nearly identical to Percona, just about 50ms slower once the vCPUs started to get saturated. The detailed look in the response time distribution tells a similar story.

99% of requests finished in 875ms

The response time distribution is where you can see Percona and MySQL diverge a bit. At the 99th percentile MySQL returns in 875ms, while Percona was more like 550ms. In general, the distribution matches what one would expect given how the response time graph looks.

Conclusions

The out-of-the-box numbers for MariaDB make it look like the clear winner here, and in the raw performance department, compared to both Percona and MySQL, it is. This isn’t to say that you couldn’t tune Percona or MySQL to out-perform MariaDB, only that you get more performance with zero configuration changes.

            Req/s    Err/s    Med. Response Time    99% Dist.
MariaDB     379      ~1       190ms                 500ms
Percona     320      3        450ms                 550ms
MySQL       295      3        500ms                 875ms

WordPress Performance on DigitalOcean Managed MySQL

Want to performance test your own WordPress installation? Try Kernl’s WordPress Load Testing service.

The database is the most important part of any WordPress site. If you aren’t adept at managing MySQL, it can be a serious risk for you and your clients. One way to de-risk this portion of your WordPress site is to go with a managed MySQL offering. There are a ton of different options out there, but for this post we’re going to look at DigitalOcean’s Managed MySQL.

Why Use DigitalOcean Managed MySQL?

There are a lot of reasons to use Digital Ocean’s Managed MySQL (or some other offering) including:

  • Simple Setup – With a few simple clicks you can have a high quality MySQL cluster set up.
  • Horizontal Scalability – Site growing fast? You can spin up read-only nodes to help scale out read operations.
  • Automated Daily Backups – No need to set up your own backups. DigitalOcean takes care of it for you.
  • Automated Failover – If for some reason your primary database node fails, you will automatically fail over to your warm spare.
  • Security – MySQL best practices for security are automatically followed by DigitalOcean. In addition to that your database is isolated to your private network so that outside requests can’t access it by default.

Baseline Performance Test

To get things started let’s do a baseline performance test where the MySQL database is on the same box as the Nginx server. The server configuration is as follows:

  • 1GB RAM
  • 3 vCPU
  • Nginx
  • PHP-FPM 7.2
  • MariaDB 10.1
  • No caching

A test of 400 concurrent users was run using Kernl’s WordPress Load Testing service. Without caching, the results are predictably bad.

Requests per second, self-hosted MySQL: things fall apart at around ~70 req/sec

For completeness, let’s also take a look at the response time distribution.

Response time distribution, self-hosted MySQL: 99% of requests finish in ~2 seconds

Given how many failing requests we had and zero caching, the ones that did manage to get through didn’t perform too poorly. 99% in 2 seconds or less.

DigitalOcean Managed MySQL

When setting up a managed MySQL database on DigitalOcean you get the option to select the underlying hardware that powers it. For this blog post we went with the minimum possible configuration (1GB RAM, 1vCPU).
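Pointing WordPress at the managed database is then just a change to the database constants in wp-config.php. A minimal sketch with placeholder connection details (the real hostname, port, user, and password come from the DigitalOcean control panel):

<?php
// wp-config.php (excerpt). The hostname, user, and password below are
// placeholders, not real connection details.
define('DB_NAME', 'wordpress');
define('DB_USER', 'doadmin');
define('DB_PASSWORD', 'your-password-here');
// Managed MySQL listens on a non-standard port; WordPress accepts it
// appended to DB_HOST after a colon.
define('DB_HOST', 'db-mysql-sfo2-12345.db.ondigitalocean.com:25060');
?>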

Knowing that fewer resources were allocated, it was expected that performance would actually be a bit worse on the dedicated MySQL instance. These expectations were confirmed with the load test.

Requests per second, DigitalOcean Managed MySQL: things go sideways at ~50 req/s

All things considered, 50 req/s against a WordPress site with no caching isn’t all that bad. Once we hit that level, the database charts showed 100% CPU saturation, and that’s when we started to see the error rate increase. Now let’s look at the response time distribution.

Response time distribution, DigitalOcean Managed MySQL: 99% of requests finish in less than 2.6 seconds

As expected, performance is slightly worse here due to fewer resources on the dedicated MySQL machine, but not in a huge way.

Cost & Why Managed MySQL is Important

For our test we used the most basic configuration that DigitalOcean offers:

  • 1 GB RAM
  • 1 vCPU
  • 10 GB Hard Disk
  • No failover
  • Automatic backups

All of this and not having to manage MySQL for $15/month. Not too bad, but to really be getting your money’s worth (automatic failover) you need to spend more money.

  • 2 GB RAM
  • 1 vCPU
  • 25 GB Hard Disk
  • 1 Standby node
  • Automatic failover
  • Automatic backups

With automatic failover you start to reduce the risk to yourself and your customers a lot. However, this level of availability isn’t free 🙂 DigitalOcean doesn’t support failover on their cheapest node, so you have to upgrade to the next level ($30/month). After that, the failover node costs you an additional $20/month. Now we’re looking at $50/month for a managed MySQL cluster with automated failover.

If you run a WordPress based business on DigitalOcean, $50/month buys you a lot of peace of mind. It’s also easy to scale up as traffic increases and the cost is competitive in the landscape of managed MySQL. If you happen to be an excellent DBA though, you can definitely manage your own cluster at a reduced monetary cost, but increased time cost.

Conclusions

Managing your own database cluster is probably fine if cost is a problem and you are good at it. In general though, if you can afford it we recommend using a managed MySQL host so you can push the complexity of operating a highly available MySQL cluster on to someone else.

Want to performance test your own WordPress installation? Try Kernl’s WordPress Load Testing service.

What’s New With Kernl – September 2019

I hope that everyone has had a great September! There has been some interesting changes with Kernl this month, so let’s dive in and discuss them.

Features

  • Credit Card Add Changes – For the past 3 years Kernl has had a simple credit card form (card number, expiration, etc). In an effort to help reduce fraud we’ve migrated our card addition system to use Stripe’s Checkout.js. This gives us advanced fraud protection provided by Stripe and the added benefit that your card details never hit Kernl’s servers.
  • Continuous Deployment Setup – Kernl is in the middle of a big UI change for continuous deployment setup. The first step was making the ‘connect your Gitlab/GitHub/BitBucket account’ piece a lot prettier. We’ve now made it a 3 panel selection, with nice big logos and fewer ways to get confused. Check it out by going to the “Continuous Deployment” section in Kernl.
  • Repository Pruning – Kernl will now prune repositories from your repository list that you no longer have access to or no longer have connected with Kernl. The exception here is that we’ll keep a reference to repositories that you have connected to plugins or themes.
  • Invoice Payment Failure Notifications – We recently had an issue where a customer wasn’t notified that their payment failed (card had expired) 🙁 This should be resolved now as we’ve integrated with Stripe web hooks and immediately catch the event and send a message to the account owner. We’ve also added a notification preference so that you can silence these emails.

Other

  • All packages have been upgraded on all Kernl servers.
  • Marketing pages are now cached in Redis.
  • Fixed a customer reported bug in plugin_update_check.php that threw a warning in newer versions of PHP. Upgrade to version 1.2.2 to get the fix.
  • Feature flag setup wizard had weird behavior if the customer didn’t have any plugins or themes. You can now manually name your feature flag product in the wizard if you so choose.
  • Fixed a few bugs in the feature flag UI where adding/removing individual users from a flag would occasionally fail.
  • The buttons to manually manage deploy keys have been moved to the bottom of the continuous deployment page. They also come with a disclaimer now.

Blog Posts

Beta testing WordPress plugin features with Kernl WordPress Feature Flags

Imagine that you are a WordPress plugin author. Maybe you work for an agency or maybe you work for yourself. The point is that you have clients or customers that have come to expect a high level of quality from your work. Great job!

Now imagine that you have been working on a complicated but much sought after feature for your plugin. Complicated means risk. Complicated means that your hard-earned reputation for high quality could be on the chopping block if you aren’t careful.

So what do you do? You need to release the feature, but you also want to limit the risk you take by doing so. You can do a few different things, but we’re here to talk about only one of them:

  • “Dog-food” the new feature for as long as you can.
  • Unit and integration tests can help validate your code.
  • Run a limited beta program with feature flags.

Using Kernl Feature Flags to Manage a Beta Program

First of all, what is a feature flag anyway? A feature flag is a way of toggling sections of code on or off without doing a code deployment. At an extremely high level, it looks like this:

<?php
  // $flagActive would come from your feature flag service.
  if ($flagActive) {
     // enable feature
  }
?>

That’s it. In practice implementing feature flags is a bit more involved, but it doesn’t have to be much more complicated. For instance, using Kernl’s feature flag library:

<?php
  $kff = new kernl\WPFeatureFlags($kernlFeatureFlagProductKey, $userIdentifier);
  if ($kff->active('MY_FLAG')) {
    // enable feature
  }
?>

The beauty here is that you can toggle this block of code on or off for individual users, all users, or a percentage of users without ever needing to deploy any more code.

So. Easy.

And just like that ‘jack.slingerland@gmail.com’ gained access to the beta. No code deploys. No complicated anything. Just search, click, save. And what did this look like for the end user?

Before

No Feature Flag Footer Bar

After

Feature Flag Footer Bar Beta Program!

Now this is obviously a contrived example, but it has all the building blocks you need to do far more complicated integrations.

A Beta Program

With the building blocks above you can see it isn’t hard to manage a beta program. Simply ship an update to your plugin wrapped in a feature flag. After that, add specific people to it as you see fit. As you get more feedback, continue to ship updates behind the flag. Once you are confident that the code looks good, you can remove the flag completely!

Now let’s dive in to the actual plugin code.

Tutorial

Adding feature flags to your plugin is a 3-step process.

  1. Create the product & flag in Kernl.
  2. Install the feature flag library via Composer.
  3. Add the code to your plugin.

Step 1 is accomplished by signing up for Kernl and adding the product and flag. The product is just a container for all of your flags. That way your plugin can have a bunch of different flags in it but still be logically grouped together. For this case, let’s create a product called ‘Kernl Footer Flag Blog Post’ and a flag named ‘New Footer Beta’.

Step 2 is as simple as installing the Kernl WordPress feature flag library via Composer:

composer require kernl/wp-feature-flags

For step 3, you add some code to your plugin. Let’s take a look at it, followed by discussion of what it’s all doing.

<?php

require __DIR__ . '/vendor/autoload.php';
add_action('init', 'kernl_footer_flags_init');
function kernl_footer_flags_init() {
    add_action('wp_footer', 'beta_program_footer');
}

function beta_program_footer() {
    if (is_user_logged_in()) {
        $currentUser = wp_get_current_user();
        $userIdentifier = $currentUser->user_email;
    } else {
        $userIdentifier = 'Unauthenticated Users';
    }
    $kernlFeatureFlagProductKey = '5d835a2830cbb568728b9bd4';
    $cacheTimeInMinutes = 1;
    $defaultToActive = false;
    $kff = new kernl\WPFeatureFlags(
        $kernlFeatureFlagProductKey,
        $userIdentifier,
        $defaultToActive,
        $cacheTimeInMinutes
    );
    if ($kff->active('NEW_FOOTER_BETA')):
    ?>
        <style>
            .kernl-footer-flag-bar {
                background-color: #5d5d5d;
                font-size: 12px;
                text-align: right;
                color: white;
            }
        </style>
        <div class="kernl-footer-flag-bar">
            You are in the Kernl Footer Flags beta program (<?= $userIdentifier ?>)
        </div>
    <?php endif;
}
?>

That’s the bulk of the plugin code (I excluded the Kernl updater for brevity). Let’s break it down.

Init Action and Composer Auto Load

require __DIR__ . '/vendor/autoload.php';
add_action('init', 'kernl_footer_flags_init');
function kernl_footer_flags_init() {
    add_action('wp_footer', 'beta_program_footer');
}

In this code we autoload our Composer dependency and then define an ‘init’ action that adds our beta program footer. The reason we register this on the ‘init’ action is that if we do it too early we won’t be able to fetch the current user (which we use to create a unique identifier).

User Identifier Creation

function beta_program_footer() {
    if (is_user_logged_in()) {
        $currentUser = wp_get_current_user();
        $userIdentifier = $currentUser->user_email;
    } else {
        $userIdentifier = 'Unauthenticated Users';
    }

After we define our action we create the function that it calls. The first thing we want to do is create a user identifier. The user identifier is used by Kernl to determine which feature flags the identified user should see. In our case, if the person isn’t logged in they get lumped into an ‘Unauthenticated Users’ bucket. If the person is logged in, we identify them by their email. It is by this identifier that you will enable/disable features when using the ‘individual’ type feature flag. If we were using the ‘enable for a percentage of users’ type feature flag, simply assigning each user a unique identifier (maybe a UUID) would suffice.
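If we did want that percentage-based behavior, here is a sketch of one way to mint a stable identifier for anonymous visitors; the cookie name and function are made up for illustration and are not part of the Kernl library:

<?php
// Give each anonymous visitor a stable random identifier via a cookie so
// percentage-based flags bucket them consistently across page loads.
// Note: cookies must be set before any output, so hook this on 'init'
// rather than calling it from the footer.
function kernl_demo_user_identifier() {
    if (is_user_logged_in()) {
        return wp_get_current_user()->user_email;
    }
    if (!empty($_COOKIE['kernl_ff_id'])) {
        return $_COOKIE['kernl_ff_id'];
    }
    $uuid = wp_generate_uuid4(); // available in WordPress 4.7+
    setcookie('kernl_ff_id', $uuid, time() + YEAR_IN_SECONDS, '/');
    return $uuid;
}
?>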

Instantiate the Kernl Feature Flag Library

$kernlFeatureFlagProductKey = '5d835a2830cbb568728b9bd4';
$cacheTimeInMinutes = 1;
$defaultToActive = false;
$kff = new kernl\WPFeatureFlags(
    $kernlFeatureFlagProductKey,
    $userIdentifier,
    $defaultToActive,
    $cacheTimeInMinutes
);

The $kernlFeatureFlagProductKey is generated by Kernl when you create your product. The $cacheTimeInMinutes variable is so you can configure how long the flags will be cached in WordPress. If this is set to ‘0’, then the library will call Kernl every time the page loads. In general you probably don’t want this. And last but not least, $defaultToActive is a boolean variable. If true, the ‘active()’ function will return true if Kernl can’t find a flag.

Active Check

if ($kff->active('NEW_FOOTER_BETA')):

The final piece is simply checking if this feature flag is active for this user.

Putting it all Together

If you want to see the source code for this plugin and/or install it yourself:

Conclusions

Kernl WordPress Feature Flags are an incredibly powerful tool for safely releasing your code into the wild. In a situation like WordPress plugins, where production is often not a machine that you are responsible for, being able to quickly toggle code on/off without a deploy is of paramount importance.

What’s New With Kernl – July & August 2019

I hope everyone has had a great summer (or winter if you are in the southern hemisphere)! Over the past 2 months we’ve gotten a lot of great stuff done, so let’s dive in.

Features & Infrastructure

  • Kernl has upgraded from Mongo 3.x to Mongo 4.2 with WiredTiger. We get improved performance and the latest features with this change.
  • Our Redis instance has moved to DigitalOcean along with the rest of our infrastructure. Prior to this we were using a managed host that lived outside the NYC3 data center. Response times decreased ~50ms or so with this change.
  • The high traffic plugin and theme update check endpoints had a round of performance tuning done. Resource consumption was lowered in a meaningful way.
  • 🔥🔥🔥Kernl Analytics Active Plugins🔥🔥🔥- Kernl Analytics will now track what plugins are most active across your install-base. You only need to be signed up for Kernl Analytics and use the latest plugin_update_check.php file to get this new feature.
  • Our MongoDB database has been moved to DigitalOcean NYC3. Prior to this we were hosting on Compose.io outside of the data center. Originally this decision was made because managing databases is tough, but the quality of hosting at Compose has gone down significantly in the past year. With this change we shaved ~150ms off of response times.
  • Some tweaks were made to our network firewalls to make them easier to manage. Thanks DigitalOcean!

Bug Fixes

  • Load testing machines would fail to provision if the API call to DigitalOcean failed. This has been resolved.
  • The load testing master node would fail to start sometimes if secondary nodes failed to connect. The threshold for starting tests has been lowered so that this won’t happen anymore.
  • If a credit card expired and the invoice payment failed, the account wasn’t marked as paid when a new card was added and a successful payment went through.
  • When switching between themes/plugins in Kernl Analytics the domain data wasn’t reloading with the new plugin/theme.
  • Thanks to a customer bug report and code snippet, the plugin_update_check.php no longer sends headers before the license check fails.

Blog Posts

That’s it for July and August!

Load Testing Vultr’s New High Frequency Servers with WordPress

Run your own WordPress Load Tests with Kernl!

Back in June Vultr announced the general availability of their new “High Frequency” servers. Reading through the announcement I was intrigued by their claim of using “3+GHZ processors and blazing fast NVMe storage!” and immediately wondered what WordPress performance would look like versus their regular Cloud Compute offering.

What was tested?

To get a better idea of the performance characteristics of the Vultr High Frequency Compute servers, I ran several types of load tests with all Vultr servers sitting in their Silicon Valley data center:

  • 200 concurrent users from New York City (Digital Ocean NYC3)
  • 2000 concurrent users from New York City (Digital Ocean NYC3) & London (Digital Ocean LON1)

And I tested the following scenarios:

  • $6 High Frequency vs. $5 Cloud Compute, stock WordPress image (200 concurrent users)
  • $24 High Frequency vs. $20 Cloud Compute, performance tuned (2000 concurrent users)

All servers with the exception of the “performance tuned” ones were built using the pre-built WordPress image that Vultr offers with no caching or performance tuning done. The “performance tuned” servers were created using Nginx, PHP-FPM, MariaDB, Memcached, and W3 Total Cache.

Results

Starting with the Vultr High Frequency boxes, let’s break down the performance.

$6 32GB NVMe

The $6 Vultr High Frequency (HF) machine performed well against the roughly equivalent $5 Vultr Cloud Compute (CC) instance. If we take a look at the requests/failures per second charts you can see that the HF machine out-performed the CC instance by quite a lot.

$6 32GB NVMe Vultr High Frequency Requests/Failures
$5 25GB SSD Vultr Cloud Compute Requests/Failures

As you can see the HF server was able to handle roughly twice as many requests per second as the CC server without any errors. Let’s check out the average and median response times as well as the response time distribution.

$6 32GB NVMe Vultr High Frequency Response Time
$6 32GB NVMe Vultr High Frequency Response Time Distribution

You can see that as the test progressed response times got steadily worse until leveling out at around 2.5s. The response time distribution shows that 99% of requests finished in 3s or under, with the 100th percentile outlier coming in at just under 6s. This is honestly pretty good performance for no tuning at all.

$5 25GB SSD Vultr Cloud Compute Response Time
$5 25GB SSD Vultr Cloud Compute Response Time Distribution

The response times for the $5 cloud compute instance were about 2 seconds worse than the high frequency instance, with the average coming in at around 4.5 seconds. The response time distribution was worse from a performance perspective, but better from a consistency perspective, with the spread between the 50th percentile and the 100th percentile only being ~1.3s.

At a glance, it looks like the Vultr High Frequency server out-performs the cloud compute in a significant way for only $1 extra. However, our next tests show that it might not be so simple.

$24 128GB NVMe (Performance Tuned)

The next set of tests that were performed were 2000 concurrent users on a performance tuned setup. The traffic originated from a cluster of servers in both New York and London, with the host server being in Vultr’s Silicon Valley data center.

First, let’s take a look at the requests per second handled by the $24 High Frequency server and the $20 Cloud Compute server.

$24 128GB NVMe Vultr High Frequency Requests

Now, 2000 concurrent users is a lot. I think that the HF machine did quite well overall, but I confess that I did expect a bit more out of it. It topped out at around 1250 requests per second, with errors hovering in the ~175 requests per second range. If left to run for another 30 minutes I think it would have reached closer to 1350 requests per second.

$20 80GB SSD Vultr Cloud Compute Requests

This is where things start to get interesting. The Vultr marketing team bills the high frequency machines as head and shoulders above the regular cloud compute machines. Maybe they are for some workloads. But for this test the HF machine doesn’t seem much better at all. Looking at both graphs you can see that it topped out a bit higher and had fewer errors, but not enough to warrant the extra cash (in my opinion). Let’s see what sort of story the response times tell.

$24 128GB NVMe Vultr High Frequency Response Time
$24 128GB NVMe Vultr High Frequency Response Time Distribution

On the performance tuned box you can see that the requests that did complete successfully were all quite fast. The average response time was between 100ms and 200ms throughout the entire test. The response time distribution was solid as well, with 99% of requests finishing in under 500ms. Not bad for 2000 concurrent users.

Now let’s take a look at how the comparable cloud compute server did.

$20 80GB SSD Vultr Cloud Compute Response Time
$20 80GB SSD Vultr Cloud Compute Response Time Distribution

For the Vultr Cloud Compute instance response times also hovered between 100ms-200ms, although just a hair higher than the high frequency server’s. The response time distribution was excellent as well, with 99% of requests coming in under 500ms. The main difference is that there was a 100th percentile outlier here that the high frequency server didn’t have.

Thoughts & Conclusions

From what I can tell, the Vultr High Frequency servers are better than the Cloud Compute servers, but maybe only for certain types of workloads. You can see in the $5/$6 test that they easily out-performed the cloud compute instances, but in the more expensive $20/$24 test the results weren’t so cut and dry.

If we look at it from a requests per dollar standpoint, the cloud compute instance is actually a better value for this type of workload.

Requests per Dollar

I suggest running extensive load tests against the Vultr High Frequency machines before making any effort to switch over. They might be better for you. They might perform the same for more money.

In conclusion: ¯\_(ツ)_/¯

Run your own WordPress Load Tests with Kernl!

What’s New With Kernl – June 2019

Hello everyone! Kernl got some pretty interesting updates this month, so let’s dive in!

Features

Average & Median Response Times for Load Tests – Kernl’s WordPress load testing service has always had this data available; we just never surfaced it in a way that was easy to consume. You’ll now see a new tab when you run a load test with information about average and median response times during the course of your test.

Progress Bar for Load Test Initialization – When you create a load test you will now see a progress bar that indicates how far along we are in provisioning your test infrastructure. Prior to this update it was easy to think that the process had stalled out or broken.

Bug Fixes & Other

  • Load test site ownership verification would fail when certain types of HTML minification were in use. This has been resolved.
  • Our internal analytics service has been refactored into a singleton. This reduced each app server’s memory footprint by 5MB.
  • The public license validate endpoint now has a 10 second cache on it. This helps us deal with any sort of burst traffic from license validations.
  • Home page Javascript has been minified and concatenated into a single script to decrease load time.
  • All packages have been updated on our servers.

That’s it for June! If you have any questions reach out to jack@kernl.us.