Adventures in Scalable WordPress Hosting: Part 2

Interested in testing your WordPress scalability? Check out the Kernl WordPress Load Testing beta program!

In part 1 of this series I explored scaling WordPress using WP Super Cache and by throwing more expensive hardware at the problem. In part 2 we’ll go on an adventure in horizontal scalability using load balancers, NFS, Memcached, and an externally hosted MySQL.

The Plan

Horizontally scaling any app is an exercise in breaking things apart as much as possible. In the case of WordPress there are a few shared components that I wanted to break up:

  • File System – The file system is the most problematic part of scaling WordPress. Unless you change how WordPress stores plugins, themes, media, and other things you need to have a shared file system that all nodes in your cluster can access. There are likely some other solutions here, but this one provides a lot of flexibility.
  • MySQL – In many WordPress installs MySQL lives on the same machine as WordPress. For a horizontally scaled cluster this doesn’t work so we need a MySQL that is external.
  • Memcached – It was brought to my attention that during part 1 of this series using WP Super Cache to generate static pages was sort of cheating. In the spirit of making this harder for myself I introduced W3 Total Cache instead and will be using an external Memcached instance as the shared cache.

Now that the basic why and what is out of the way, let’s talk about the how. I’m a huge fan of Digital Ocean. I use them for everything except file storage, so I’m going to use them for this WordPress cluster as well. Here’s how it’s going down:

  1. Create a droplet that will act as the file system for our cluster. Using NFS all droplets in the cluster will be able to mount it and use it for WordPress. I’m also going to use this for Memcached since NFS doesn’t take up many resources.
  2. Create a base droplet that has Nginx and PHP7.2-FPM installed on it. There is a little bit of boilerplate configuration here, but in general the install is typical. The only change to the Nginx configuration was setting the root directory to the NFS mount. Use this base droplet to configure WordPress database settings.
  3. Use Compose.io to create a MySQL database. I wanted something that was configured well that I didn’t have to think about. Totally worth the $27 / month.
  4. Once the above are done take a snapshot of the base droplet and use it to create more droplets. If all goes well you shouldn’t need to do any configuration.
  5. Using Digital Ocean’s load balancer service add your droplets to the load balancer.
  6. Voila! That’s it.
Ugly architecture diagram
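For reference, steps 1 and 2 boil down to a few lines of configuration. These snippets are a sketch with hypothetical IPs and paths, not my exact setup:

```
# On the NFS droplet, /etc/exports — share the WordPress directory:
/var/www/wordpress  10.0.0.0/16(rw,sync,no_subtree_check)

# On each app droplet, /etc/fstab — mount the share at boot:
10.0.0.2:/var/www/wordpress  /var/www/wordpress  nfs  defaults,_netdev  0  0

# In each app droplet's Nginx server block — point the root at the mount:
#   root /var/www/wordpress;
```

After a `mount -a` on an app droplet, WordPress’s plugins, themes, and uploads are shared by every node in the cluster.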

No Cache Smoke Test

200 users, 10 minutes, 2 users/sec ramp up, from London

As with every load test that I do, the first test is always just to shake out any bugs in the load test itself. For this test I didn’t have any caching enabled and only a single app server behind the load balancer. It was effectively the same as the first load test I did during part 1 of this blog series.

As you can see from the graph below, performance was what we would expect from the setup that I used. We settled in to 21 requests / second with no errors.

As Expected.

The response time distribution wasn’t great. 90% of requests finished in under 5 seconds, but that’s still a very long time. Generally if I saw this response time distribution I would think that it’s time to add caching or scale up/out.

Not bad. Not great.

So. Many. Failures.

2000 users, 120 minutes, 2 users/sec ramp up, from London

The next test I decided to run was the sustained heavy load test. This is generally where I start to see failures from managed WordPress hosting providers. Given that I didn’t add any more app servers to the load balancer and had no caching things went as poorly as you would expect.

All the failures of failure land.

Everything was fine up until ~25 req/s and then the wheels fell off. The response time distribution was bad too. No surprises here.

50% of requests in 5 seconds, 100% in… 33 seconds 🙁

Looks like it’s time to scale.

Horizontal Scalability

2000 users, 120 minutes, 2 users/sec ramp up, from London

Before adding Memcached to the setup I wanted to see how it scaled without it. That means adding more hardware. For this test I added four more application servers (Nginx + PHP) to the load balancer and ran the test again.

Linear Growth

As you can see from the request/failure graph, we experienced roughly linear growth in our maximum requests/second. Given that we originally maxed out at ~20 req/s on one machine, maxing out at ~100 req/s with five machines is exactly the sort of result I would expect to see. The response time distribution also started to look better:

Not perfect, but better.

Obviously a 90% score of 4 seconds isn’t awesome, but it is a lot better than the previous test. I did make a tiny tweak to the load balancer configuration that may have helped, though: I used the ‘least connections’ option instead of ‘round robin’. ‘Least connections’ tells the load balancer to send traffic to the app server with the fewest active connections. This should help avoid dog-piling on a server with a few slower connections.
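Digital Ocean’s balancer is managed, so there is nothing to configure beyond picking the algorithm, but the equivalent behavior in a self-hosted Nginx balancer would look roughly like this (IPs are hypothetical):

```nginx
upstream wordpress_cluster {
    least_conn;           # prefer the app server with the fewest active connections
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
    server 10.0.0.14;
    server 10.0.0.15;
}

server {
    listen 80;
    location / {
        proxy_pass http://wordpress_cluster;
        proxy_set_header Host $host;
    }
}
```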

Given the results above we can safely assume linear growth tied to the number of app servers that we have for quite some time. Meaning for each app server that I add I can expect to handle an additional ~20 req/s. With that in mind, I wanted to see what would happen if I enabled some caching on this cluster.
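As a back-of-the-envelope check, that linear-growth assumption can be expressed in a couple of lines (the ~20 req/s per server figure comes from the tests above, and it obviously only holds until some shared resource becomes the bottleneck):

```python
import math

def servers_needed(target_rps: float, rps_per_server: float = 20) -> int:
    """Estimate app servers required, assuming throughput scales linearly."""
    return math.ceil(target_rps / rps_per_server)

print(servers_needed(100))  # the five-server cluster above -> 5
```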

Gotta Go Fast

In my previous test of vertical scaling I used WP Super Cache to make things go quick. WP Super Cache generates static HTML pages for your site and then serves those; the benefit is that static pages are extremely fast to serve. In this test I wanted to try a more dynamic approach using Memcached and W3 Total Cache. W3 Total Cache takes a very different approach to caching, storing pages, objects, and database queries in Memcached. In general this caching model is more flexible, but possibly a bit slower. I installed Memcached on the same server as the NFS mount because that machine was underutilized. In a real production scenario I wouldn’t violate this separation of concerns.

Once I enabled W3 Total Cache and re-ran the last test I got some pretty great results.

Boom.

With W3 Total Cache enabled and 5 app servers we settled in at ~370 requests/second. More impressive is that we only saw 5 failures during the entire test. For perspective, Kernl pushed 1,329,470 requests at the WordPress cluster I created. That’s a failure rate of about 0.0004%.
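The failure-rate arithmetic here is simple division:

```python
requests = 1_329_470
failures = 5
rate_pct = failures / requests * 100
print(f"{rate_pct:.4f}%")  # -> 0.0004%
```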

My favorite part of this test was the response time distribution. Without having to wait on MySQL for queries the response times became crazy good.

The “bad” outlier is only 2.5s.

99% of requests finished in 29ms. And the outlier at 100% was only 2.5 seconds. Not bad for WordPress.

Going Further

Being the good software developer that I am, I wanted to push this setup to its limits. So I decided to try a test that is an order of magnitude more difficult:

20,000 users, 10 users/sec ramp up, for 60 minutes, from London

Things didn’t go great, but not because of WordPress. I won’t show any graphs of this test because I started to get limited by the network card on the NFS/Memcached machine. Digital Ocean says that I can expect around 30MB/sec out of a given droplet, and with this test I was starting to bump into that limit. If I wanted to test further I would have had to load balance Memcached, which felt a bit outside of scope. In a real production scenario I would likely pay for a hosted Memcached service to deal with this for me.

Conclusions

With Kernl I’m always weighing the build versus buy question when it comes to infrastructure and services. Given how much effort I had to put in to making this setup horizontally scalable and how much effort it would take to make it reproducible and manageable, it hardly seems worth creating and managing my own infrastructure.

Aside from my time the cost of the hardware was also not cheap.

  • Load Balancer – $10 / month
  • MySQL Database – $27 / month
  • Memcached (if separate from NFS) – $5 / month
  • NFS Mount (if separate from Memcached) – $5 / month
  • Application Servers – $25 / month ($5 / month * 5 servers)
  • Total – $72 / month

At $72 / month I could easily have any of the managed WordPress hosting companies (GoDaddy, SiteGround, WPEngine, etc.) run my setup and handle updates, security, and so on. The only potential hiccup is the traffic limits they place on your account. This setup can handle millions of requests per day, and while their setups can too, they’ll charge you a hefty fee for it.

As with any decision about hardware and scaling the choice varies from person to person and organization to organization. If you have a dedicated Ops team and existing hardware, maybe scaling on your own hardware makes sense. If you’re a WordPress freelancer and don’t want to worry about it, maybe it doesn’t. IMHO I wouldn’t scale WordPress on my own. I’d rather leave it to the professionals.

Interested in testing your WordPress scalability? Check out the Kernl WordPress Load Testing beta program!

Load Testing the ChemiCloud Managed WordPress Hosting Service

At the beginning of December Kernl launched a closed beta for our WordPress Load Testing service. As part of the bug shakedown we’ve been spending some time load testing different managed WordPress hosting services. Some of our previous tests include WordPress.com, CloudWays, and GoDaddy. For this test, we turned our sights on ChemiCloud.

How do we judge the platform?

Using Kernl’s load testing feature we run 3 different load tests against the target system.

  • The Baseline – This is a simple baseline load test that we use to verify that our configuration is correct and that the target can handle even minor traffic. It consists of 200 concurrent users, for 10 minutes, ramping up at 2 users / second, with traffic originating in San Francisco.
  • Sustained Traffic – The sustained traffic test mimics what traffic might look like for a read-heavy website with a lot of visitors. This load test consists of 2000 concurrent users, for 2 hours, ramping up at 2 users / second, with traffic originating from San Francisco.
  • Traffic Spike – This test is brutal. We use it to mimic the sort of traffic that your WordPress site might experience if a link to it were shared by a Twitter or Instagram celebrity. The load test consists of 20,000 concurrent users, for 1 hour, ramping up at 10 users / second, with traffic originating from San Francisco.

All traffic for this test is generated out of Digital Ocean’s SFO2 data center.
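One detail worth noting about these parameters: the ramp-up rate determines how long each test takes to reach full load, which matters when interpreting the request graphs below.

```python
def ramp_minutes(users: int, users_per_second: float) -> float:
    """Minutes until every simulated user is active."""
    return users / users_per_second / 60

print(ramp_minutes(200, 2))     # baseline
print(ramp_minutes(2000, 2))    # sustained traffic
print(ramp_minutes(20000, 10))  # traffic spike
```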

What ChemiCloud plan was used?

ChemiCloud has several different tiers for managed WordPress hosting. We decided on the “Oxygen” plan, which at a high level aligned well with the plans we’ve tested thus far.

ChemiCloud - Oxygen Plan
ChemiCloud – Oxygen Plan

Caveats

This load test is intentionally simple. It is read heavy. Many WordPress sites have this sort of traffic profile, but not all do. If you need to perform a WordPress load test with a different traffic profile Kernl supports this. Ideally we should also do multiple tests over time to make sure that this test wasn’t an outlier. Future load test articles will hopefully include this sort of rigor but for now this test can give you reasonable confidence in how you can expect ChemiCloud to perform under a read-heavy load.

The Baseline Test

200 concurrent users, 2 users / s ramp up, 10 minutes, SFO

Like most of the hosting providers that we test, ChemiCloud performed well on the baseline test. They settled in at right around 25 requests / second.

ChemiCloud - Requests
ChemiCloud – Requests

We did see a few failures towards the end of the test, but it was only a brief spike. Once the spike passed we didn’t see any more errors for the duration of the test.

ChemiCloud - Failures
ChemiCloud – Failures

The response time distribution for ChemiCloud was solid for this baseline test. 99% of requests finished in 550ms. If we go further down the distribution you can see that 95% of requests finished in ~250ms which is quite good. Even the 100% outlier still wasn’t that bad.

ChemiCloud - Response Time Distribution
ChemiCloud – Response Time Distribution

Sustained Traffic Test

2000 concurrent users, 2 users / s ramp up, 2 hours, SFO

For the sustained traffic test ChemiCloud did a great job serving requests while keeping response times down. As you can see from the graph below, the test settled in to right around 260 requests / second. The journey to that many users was smooth and there aren’t any surprises on the graph.

ChemiCloud - Requests
ChemiCloud – Requests

There were a few failures during the test period, but they were only temporary blips. About half-way through the test we ran into ~32 failures. After that we didn’t see any more for a while, and then one more appeared before the errors stopped for the rest of the test. For some perspective, we performed 1,861,230 requests against ChemiCloud and only 33 failed. That’s a failure rate of 0.0018%! Nice work, team ChemiCloud.

ChemiCloud - Failures
ChemiCloud – Failures

The response time distribution was pretty great for the sustained test as well. While there was an outlier at 100% (which is common), 99% of requests finished in under 400ms. That’s an effort worthy of praise with WordPress!

ChemiCloud - Response Time Distribution
ChemiCloud – Response Time Distribution

Traffic Spike Test

20000 concurrent users, 10 users / s ramp up, 1 hour, SFO

The traffic spike load test is brutal for any host. Nobody ever expects to see this kind of traffic out of nowhere so few are prepared for it. ChemiCloud handled the traffic rather well though. We eventually reached 1200 requests / second which is pretty impressive for a plan that costs $17.95 a month. There weren’t any surprises on the way up to that level of traffic, but as you’ll see we did start to see error rates increase.

ChemiCloud - Requests
ChemiCloud – Requests per Second

At about 15 minutes into the load test we started to see an uptick in failure rates. The rate of failure stayed consistent throughout the test after that. This is a fairly common pattern when hosts become overloaded with traffic. In general ChemiCloud performed well even with these failures. We sent 4,332,244 requests to ChemiCloud over an hour period and 134,893 failed. For this sort of load test a failure rate of 3.1% isn’t bad.

ChemiCloud - Failures
ChemiCloud – Failures

The most interesting graph from this load test was the response time distribution. You would expect to see a general degradation of response time performance as request failures increased but that wasn’t the case at all. Everything below the 99th percentile performed remarkably well considering the traffic we threw at it. 98% of requests finished in under 370ms. Great work!

ChemiCloud - Response Time Distribution
ChemiCloud – Response Time Distribution

Conclusions

ChemiCloud competes well with the other hosts that we’ve tested. They have a solid price-point and you get a lot of control over your WordPress environment. If you need a host that can handle some solid traffic spikes they are a good choice.

Want to be part of the Kernl WordPress Load Testing Beta? Sign up and then send an email to jack@kernl.us

Load Testing the WordPress.com Managed WordPress Service

This December Kernl launched its new WordPress Load Testing service. As part of the bug shakedown we decided to load test as many managed WordPress providers as we could. In this test, we turn our sights to WordPress.com.

How do we judge the platform?

For this series of blog posts we judge the platform via 3 different load tests.

  • Baseline – This test is for 200 concurrent users, for 10 minutes, with a 2 user / second ramp up. We use this test to double-check our configuration before throwing heavier load at the provider.
  • Sustained Traffic – 2000 concurrent users, for 2 hours, ramping up at 2 users / second. This test represents what a high traffic WordPress site might see on a day to day basis.
  • Traffic Spike – The traffic spike test simulates what might happen if a Twitter or Instagram celebrity mentioned your site. 20,000 concurrent users, for 1 hour, ramping up at 10 users/s.

For all 3 tests traffic is generated out of Digital Ocean’s San Francisco 2 (SFO2) data center.

What WordPress.com plan was used?

For this test we used the Free WordPress.com plan. We didn’t need any of the bells and whistles, plus (as you’ll see later) the performance didn’t suffer at all. If performance had been impacted we would have increased our plan to somewhere around the $10/month mark. As with all of our load tests we don’t do any configuration. We simply import the content of http://www.re-cycledair.com and then start testing.

WordPress.com Free Plan
WordPress.com Free Plan

The Baseline Test

200 concurrent users, 2 users / s ramp up, 10 minutes, SFO

As expected WordPress.com did very well during the baseline test. The test settled in at 26 req/s.

WordPress.com Load Test - Requests per second
WordPress.com Load Test – Requests per second

There also weren’t any failures through the duration of the test.

WordPress.com Load Test - Failures
WordPress.com Load Test – Failures

The response time distribution was also excellent with 99% of requests returning in under 200ms.

WordPress.com Load Test - Response Time Distribution
WordPress.com Load Test – Response Time Distribution

The Sustained Traffic Test

2000 concurrent users, 2 users / s ramp up, 2 hours, SFO

The sustained traffic test is where WordPress.com’s hosting really started to shine. As of now it is hands down the best host that we’ve tested. The throughput settled in at around 258 requests / second. There also weren’t any surprises on our way up to that number of requests.

WordPress.com Load Test - Requests per second
WordPress.com Load Test – Requests per second

The most impressive part of this entire load test was the failure rate. Over a 2 hour test, under heavy load, across more than 1.4 million requests, not a single request failed. That’s some serious stability.

WordPress.com Load Test - Failures
WordPress.com Load Test – Failures

While not as impressive as the 0% failure rate, the response time distribution was still pretty amazing. 99% of all requests finished in well under 100ms. There was an outlier in the ~1500ms range, but that isn’t uncommon for load tests.

WordPress.com Load Test - Response Time Distribution
WordPress.com Load Test – Response Time Distribution

The Traffic Spike Test

20000 concurrent users, 10 users / s ramp up, 1 hour, SFO

WordPress.com is blazing fast. It didn’t even flinch with 20000 concurrent users. The request rate settled in at 1717 requests / second (!). On the way up to that request rate there were no surprises or stutter steps.

WordPress.com Load Test - Requests per second
WordPress.com Load Test – Requests per second

The failure rate was exceptional as well. For an hour long test, with sustained heavy load, and a total of 4.3 million requests, there were 0 errors.

WordPress.com Load Test - Failures
WordPress.com Load Test – Failure Rate

Finally, the most impressive graph in this entire test! For the traffic spike test, WordPress.com’s distribution chart is nothing short of fantastic. 99% of traffic had response times below 50ms, and even the 100% outlier was still only 1 second. Great work WordPress.com team!

WordPress.com Load Test - Response time distribution
WordPress.com Load Test – Response Time Distribution

Conclusions

If you need extremely robust performance and are OK with the restrictions of WordPress.com they seem like a great choice.

Want to be part of the Kernl WordPress Load Testing Beta? Sign up and then send an email to jack@kernl.us

Load Testing GoDaddy’s Managed WordPress Service

Earlier this December Kernl launched a closed beta of our WordPress load testing service. As part of that beta we’ve decided to run a series of load tests against some of the common managed WordPress hosting services.

GoDaddy was chosen as our first load test target for a few different reasons:

  • GoDaddy has been around for ages.
  • They offer a managed WordPress platform.
  • They are fairly inexpensive for the service that they are offering.

How will providers be judged?

There are a number of different ways to judge a WordPress hosting provider. How reliable are they? Do they perform patches for you? What is their customer support like? How fast are they? For the purpose of our tests we’re focusing on raw speed under heavy load. We will only be judging the hosting providers on that metric. To test the speed of the hosting provider under heavy load we ran 3 tests:

  • The Small Test – 200 users, for 10 minutes, ramping up at a rate of 2 users per second. We did this test to check our configuration before we ran more intense load tests.
  • The Sustained Traffic Test – 2000 users, for 2 hours, ramping up at a rate of 2 users per second. This test was performed to see how GoDaddy’s WordPress hosting would perform under a sustained heavy load.
  • The Traffic Spike Test – 20000 users, for 1 hour, ramping up at a rate of 10 users per second. This test was used to determine how GoDaddy would handle lots of traffic coming in at once versus the slower ramp up of the sustained traffic test.

There was no configuration or tweaking done on the GoDaddy WordPress install. We simply imported the content of http://www.re-cycledair.com and started testing.

The GoDaddy WordPress Plan

An important part of this load test was which GoDaddy WordPress hosting plan was selected. As we’re going to try and do this across multiple different providers we’ve opted to go for plans based roughly on price. This plan was the “Deluxe Managed WordPress” plan that costs $12.99 / month.

GoDaddy WordPress Deluxe plan

Load Test Location

For these three load tests we generated traffic out of Digital Ocean’s SFO2 (San Francisco, CA, United States) data center.

The Small Test

200 concurrent users, 10 minutes, 2 user / sec ramp up, San Francisco

The requests graph represents the number of requests per second that the site under load is serving successfully. From the graph below, you can see that GoDaddy had no problem serving 200 concurrent users. After the ramp up completed things settled in at around 25 requests / second.

GoDaddy load testing - requests
GoDaddy WordPress Hosting – 200 concurrent users

The failure graph shows that during this load test there weren’t any reported failures.

GoDaddy load testing - failures
GoDaddy WordPress Hosting – 200 concurrent users

The final graph of the 200 concurrent user small test is the distribution graph. This is probably the most important part of these tests because it helps you understand what your end user experience will be like when your site is under heavy load.

GoDaddy load testing - distribution
GoDaddy WordPress Hosting – 200 concurrent users

To understand the graph, select a column; we’ll look at the 99% column. Now read the value of the column (~600ms). You can now say that for this load test, 99% of all requests finished in under 600ms. If you look at the 95% column you can see that the result is ~200ms, which is pretty fantastic. The 100% column is almost always an outlier, but even in this case having 1% of requests finish between 500ms and 2200ms seems OK.
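Under the hood a distribution chart like this is just sorted response times read at given percentiles. A minimal nearest-rank sketch (the sample times are made up, not GoDaddy’s actual data):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ranked = sorted(samples)
    index = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[index]

times_ms = [120, 150, 180, 200, 210, 230, 400, 550, 600, 2200]
print(percentile(times_ms, 90))   # -> 600
print(percentile(times_ms, 100))  # the outlier -> 2200
```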

The Sustained Traffic Test

2000 concurrent users, 2 hours, 2 user / sec ramp up, San Francisco

The requests graph for the sustained traffic test yielded a nice curve. The traffic ended up leveling out at 252 requests / second. The transition along the curve was smooth and there weren’t any obvious pain points for request throughput during the test.

GoDaddy load testing - requests
GoDaddy WordPress Hosting – 2000 concurrent users

The failure graph for this set of tests is particularly interesting. About 10 minutes into the test we see a HUGE spike in errors. After a short period of time the errors stop accumulating. I’m not sure what happened here, but I suspect that some sort of scaling event was triggered in GoDaddy’s infrastructure. After the scaling event completed they were able to continue serving traffic. We didn’t see any more errors for the rest of the test.

GoDaddy load testing - failures
GoDaddy WordPress Hosting – 2000 concurrent users

For the distribution graph of this load test I would argue that GoDaddy performed very well under some fairly intense load. 99% of requests finished in 460ms. There is obviously an issue with the other 1%, but that was likely due to the weird error event at around the 10 minute mark.

GoDaddy load testing - distribution
GoDaddy WordPress Hosting – 2000 concurrent users

Overall GoDaddy performed far better than I expected on the sustained traffic test. I personally haven’t used GoDaddy as a WordPress host in ages, but for this one metric (performance under load) I think they really did a great job.

The Traffic Spike Test

20000 concurrent users, 1 hour, 10 user / sec ramp up, San Francisco

The traffic spike test is absolutely brutal but is definitely the kind of traffic you can expect if you had an article or site shared by a Twitter celebrity with a large following.

The requests graph for this test is by far my favorite out of this entire article. It shows linear growth with no slowing down. For reasons highlighted later I killed this test at the ~10 minute mark, but up until that point GoDaddy was a rocket ship. At the point I stopped the test we were running at 483 requests / second.

GoDaddy load testing - requests
GoDaddy WordPress Hosting – 20000 concurrent users

The failure graph for this test is interesting as well. You can see that all was well until about 9 minutes in, when errors increased sharply. I could have continued the load test but chose to stop it at this point due to the increased error rates. In hindsight I should have continued the test. Next time!

GoDaddy load testing - failures
GoDaddy WordPress Hosting – 20000 concurrent users

The most impressive aspect of the traffic spike test was the distribution chart. Even under some incredibly high load (for a WordPress site), GoDaddy still returned 99% of requests in under 500ms. Great work, team GoDaddy!

GoDaddy load testing - distribution
GoDaddy WordPress Hosting – 20000 concurrent users

Conclusions

For the single metric of speed and responsiveness under heavy load, I think that GoDaddy’s managed WordPress solution did a fantastic job of handling the load that the Kernl WordPress load testing tool was throwing at it. If you have a site with really high traffic, GoDaddy should be on your list of hosts to check out.

Want to run load tests against your own WordPress sites? Sign up for Kernl now!

Tips to Keep a WordPress Site Safe

If you want to get a website developed for your business, you should definitely take a look at WordPress. It is one of the most convenient ways to get a website up and running. But when you are building your website with WordPress, you should never overlook security.

When your website goes live it is open to the whole world. If you have not integrated the necessary security precautions, your website might face some serious security issues. Your website might contain sensitive data like customers’ credit card details and personal information. If you have not taken precautions, you and your customers may be exposed to a considerable risk of being hacked.

Once a hacker gets into your website they can not only steal your information but also destroy all of your website’s data. The most dangerous part of such destruction is that you can never undo it. Therefore it is your duty to take the necessary precautions to discourage hackers, keep your data safe, and let your business continue to run smoothly.

Below are some security precautions you can consider for your website. By taking these steps you can seriously reduce the chance that you’ll get hacked.

1.  Stay Updated

Be aware of the latest threats in the digital world. You should know at least the basics of the possible threats, which will help you decide what precautions to take. There are a few websites that can help you stay up to date; check them regularly.

2.  Lock Down Your Access Control

Make sure that your basic precautions are tough enough to discourage a hacker. Never use easy-to-guess passwords and usernames. It is much better if you change the default database prefix from wp_ to something harder to guess. You can also limit login attempts. If there is more than one user on your WordPress website, you need to enforce strong security policies for all of those users. Following these precautions makes it far less likely that your website will be hacked by simple password guessing.
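For example, the table prefix lives in wp-config.php and must be set before WordPress creates its tables (renaming the tables of an existing install is a separate, more involved job); the prefix below is just an illustration:

```php
// wp-config.php — the default is 'wp_'; a random prefix makes automated
// attacks that assume table names like 'wp_users' less likely to succeed.
$table_prefix = 'k7x3q_';
```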

3.  Software updates are crucial

It is important to update your software regularly. Do not neglect updates; what if the most recent update fixes a security vulnerability and you have ignored it? Hackers are quick to exploit known security issues in WordPress, so take advantage of the security updates that WordPress provides. Ideally you should automatically install all core WordPress updates.

4.  Install Security Ninja

If you’re wondering how to keep your WordPress site safe, you should take a look at the security plugins available as well. That’s where Security Ninja comes into play. It is one of the best and most popular security plugins available for WordPress users. Security Ninja makes it easy to secure your WordPress site.

Security Ninja only takes a minute to scan your whole site for security issues and then helps you deal with them one by one. Easy and efficient!

WP Security Ninja

5.  Maintain a strong network security system

You should pay close attention to the security of your network. You never know when users in your office might inadvertently provide easy access to your servers. Therefore, make sure that you have taken action to:

  • Expire user logins after a short idle period.
  • Change passwords frequently.
  • Use strong passwords and never write them down.
  • Scan all peripherals whenever you connect them to a computer.

6.  Try a Web Application Firewall (WAF)

A WAF can be software, hardware, or a combination of both. It sits between your web server and the data connection and inspects every bit of data transmitted. WAFs can block malicious requests and hacking attempts, giving you peace of mind. There are some firewall options available for WordPress for you to try out as well.

Again, we advise checking the Security Ninja PRO plugins which now feature a Cloud Firewall that will protect you.

7.  Install security applications

Although not as effective as WAFs, security applications can make hacking attempts more difficult. There are free and paid applications available to download, and they are good at preventing automated hacking attempts.

8.  Use SSL

You have the option of using the encrypted SSL protocol when transferring your users’ sensitive data. This protects the information from being read by unauthorized parties. Setting up SSL with WordPress is pretty easy and should be considered an important step in securing any WordPress installation.

9.  Backing up

Last but not least, you should back up your website’s data frequently, both online and offline. This will be invaluable if your website is ever damaged. Use an automated backup system to back up the website multiple times a day.
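Assuming WP-CLI is installed on the server, a nightly database export can be scheduled with a cron entry along these lines (paths, user, and schedule are illustrative):

```
# /etc/cron.d/wordpress-backup — dump the DB at 02:30 every night
# (% must be escaped in crontab entries)
30 2 * * * www-data cd /var/www/wordpress && wp db export /backups/wp-$(date +\%F).sql
```

Pair this with copying the dump and the wp-content directory off-site; a backup that lives only on the web server disappears with it.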

Kernl Now Supports Update Icons

Have you ever wished that Kernl supported plugin update icons? Well, your wish is our command!

Before

As you can see, before this change the icon displayed on the WordPress update dashboard for Kernl-based plugins was the default “power cord” image.

Kernl Plugin Update Icon - Before

After

Now you can upload your own icon to Kernl and have it displayed in the update dashboard.

Kernl Plugin Update Icon - After

Getting Started

Using the new plugin update icon feature is easy.

  1. Add the latest version of the plugin_update_check.php file to your plugin.
  2. Upload an icon (64×64) to Kernl in the plugin meta tab.

How to upload new plugin icon

That’s it! Deploy your update so that all of your customers get the new plugin_update_check file and Kernl will start serving your update icon when you release your next update.

If you have any questions or need help getting set up, shoot an email to jack@kernl.us.

Feature Flags – Managing a WordPress Beta Program

Let’s say that you’re an author of a premium (i.e. paid for) WordPress plugin or theme. You’ve been hard at work on an amazing new feature but it really needs some testing before it goes out to all of your customers. How do you manage this process? What’s the best way to get new code into the hands of your beta users with the least effort on your part? This blog post is going to walk you through the process of using Kernl Feature Flags to easily manage a beta program without having two separate builds of your product.


What is a feature flag and how does it work?

Feature flagging is a software best practice for controlling the release of features (sometimes called “gating”). Feature flagging is important because it allows you to turn features on/off without having to do a deploy. But how does this impact you? How does it make things easier?

Feature flags allow you to manage beta programs with a single version of your product. What most people currently do is keep two versions of their product at any given time: the live version (what everyone sees) and the beta version (what your beta users see). Wouldn’t it be easier to have one version, one deployment process, and highly granular control over who sees what features?

Feature Flags Product View

For example, in the image above you can see that I have two feature flags: “GitLab CI” and “Download Invoice”. Right now they are both active and people can see the features that they represent. If I decided to change “Download Invoice” to inactive, the feature would be immediately deactivated in my plugin. I wouldn’t have to do another deploy and release a new version to make it happen. It happens automatically in the code that’s already with your customers.

Seems great right? Let’s do a full example so you can truly appreciate the power of feature flags.

Example: Adding a feature flag to the Kernl Example Plugin

The Kernl Example Plugin is intentionally very simple. The goal of the plugin is to show off Kernl’s various features while not overwhelming the person who is looking at it. To illustrate how and why feature flags are awesome, let’s add a simple setting to the “Settings -> General” menu.

The example plugin currently looks like this:

<?php
/**
* Plugin Name: Kernl Example Plugin
* Plugin URI: https://kernl.us
* Description: The Kernl Plugin for testing.
* Version: 3.3.0
* Author: Jack Slingerland
* Author URI: http://re-cycledair.com
*/
require 'plugin_update_check.php';

$MyUpdateChecker = new PluginUpdateChecker_2_0(
    'https://kernl.us/api/v1/updates/5544bd7e5b8ae0fc1fa5e7a5/',
    __FILE__,
    'kernl-example-plugin',
    1
);
?>
All it does is require “plugin_update_check.php” and instantiate the update checker. Let’s make things a little more complicated by adding a setting to the example plugin.

Add a setting to the plugin

// This is added below plugin update instantiation.

function feature_flagged_settings_api_init() {
   add_settings_section(
        'feature_flagged_setting_section',
        'Feature Flag Example Settings in General',
        'feature_flagged_setting_section_callback_function',
        'general'
    );
   add_settings_field(
        'feature_flag_setting_name',
        'My Feature Flag Setting',
        'feature_flag_setting_callback_function',
        'general',
        'feature_flagged_setting_section'
    );
   register_setting( 'reading', 'feature_flag_setting_name' );
}
add_action( 'admin_init', 'feature_flagged_settings_api_init' );

function feature_flagged_setting_section_callback_function() {
   echo '<p>This section is hidden completely behind a Kernl Feature Flag.</p>';
}

function feature_flag_setting_callback_function() {
    echo '<input name="feature_flag_setting_name" id="feature_flag_setting_name" type="checkbox" value="1" class="code" ' . checked( 1, get_option( 'feature_flag_setting_name' ), false ) . ' /> This checkbox is hidden behind a feature flag.';
}
The above code uses the ‘admin_init’ hook to call the WordPress Settings API and add a settings section. It looks like this when you run the code:
Feature Flags Example Setting Image
Awesome! We’re off to a great start. Now let’s wrap this in a feature flag so only our beta users can see it.

Create an On/Off feature flag

Kernl has extensive documentation for feature flag usage, but it all boils down to:

  1. Add the feature flag library to your plugin/theme.
  2. Create a feature flag product in Kernl. A good rule here is 1 plugin / theme to 1 feature flag product.
  3. Create a flag.
  4. Instantiate the feature flag library and wrap your code.
  5. Manage who sees the feature using Kernl.

So let’s do step one. If you’re using Composer, follow the directions in the feature flag documentation. If not, you can go to https://github.com/wpkernl/WPFeatureFlags, download the WPFeatureFlags.php file, and drop it into your plugin or theme.

<?php
// ... snip ...
require 'plugin_update_check.php';
require 'WPFeatureFlags.php';
// ... snip ...
Easy. Next, go to Kernl and add a new product in the Feature Flags section.
Feature Flags Add Product Button
When you click “Add Product” you’ll get to choose a product name. Since I’m using feature flags with my example plugin, I’ll name mine “Kernl Example Plugin”.
Feature Flags Add Product Modal
After you click save, you’ll see the new product at the bottom of your feature flag product list.
Feature Flags Created product in product list
Now here is something important! See the “key” next to your product’s name? You’ll need that to instantiate the WPFeatureFlags class. I’d go ahead and copy it to your clipboard now.
Next up, let’s add a simple feature flag for this new setting we’re adding. This is the thing that will control visibility for all of our users. Click “Manage Flags” in the product menu, and then click the “Add Flag” button. You’ll be presented with this screen.
Add / Edit Feature Flags screen
All the options look straightforward, but let’s go over them anyway.
  • Active – Whether this feature is toggled on or off for everyone.
  • Name – A descriptive name. I’m filling this in with “General Setting Checkbox Example”.
  • Identifier – This is how we identify the flag in code, so I like to make it short and easy to understand. I’ll be using “GENERAL_SETTINGS_CHECKBOX”.
  • Flag Type – Kernl has 3 different types of feature flags for your enjoyment.
    • On/Off – As expected, this toggles the feature on and off for every user. No granularity here, but super useful for quickly disabling things. We’ll be starting with this flag.
    • Individual – You can select specific users that a feature will be toggled on for. This is what we’ll be using eventually, but there are some caveats that come with it.
    • Percentage – Kernl will roll out your feature to a percentage of your user base. Nice if you don’t want to specify individual users, but also don’t want the feature turned on for everyone.

Edit feature flags example

When things are filled out, press save and get ready to move on.

Now that we have our feature flag created, let’s instantiate the WPFeatureFlag class and wrap our code.

<?php
/**
* Plugin Name: Kernl Example Plugin
* Plugin URI: https://kernl.us
* Description: The Kernl Plugin for testing.
* Version: 3.3.0
* Author: Jack Slingerland
* Author URI: http://re-cycledair.com
*/
require 'plugin_update_check.php';
require 'WPFeatureFlags.php';

$MyUpdateChecker = new PluginUpdateChecker_2_0 (
  'https://kernl.us/api/v1/updates/5544bd7e5b8ae0fc1fa5e7a5/',
  __FILE__,
  'kernl-example-plugin',
  1
);

// We add the feature flag code inside the init() function
// so that we can have access to who the current user is.
function feature_flagged_settings_api_init() {
  // The feature flag product key. Remember the key I said you should add to your clipboard? This is it.
  $kernlFeatureFlagProductKey = '5a24035ee48da05271310a71';

  // The user identifier is how Kernl identifies the user requesting flags.
  // This should be unique for every user.
  $current_user = wp_get_current_user();
  $user_login = $current_user->user_login;
  $site_url = get_site_url();
  $userIdentifier = "{$site_url} - {$user_login}";

  $kff = new kernl\WPFeatureFlags($kernlFeatureFlagProductKey, $userIdentifier);

  // This says "For the product defined above, does this flag exist, and if so, is it active for the given user?".
  if ($kff->active("GENERAL_SETTINGS_CHECKBOX")) {
    add_settings_section(
      'feature_flagged_setting_section',
      'Feature Flag Example Settings in General',
      'feature_flagged_setting_section_callback_function',
      'general'
    );

    add_settings_field(
      'feature_flag_setting_name',
      'My Feature Flag Setting',
      'feature_flag_setting_callback_function',
      'general',
      'feature_flagged_setting_section'
    );

    register_setting( 'reading', 'feature_flag_setting_name' );
  }
}

add_action( 'admin_init', 'feature_flagged_settings_api_init' );

function feature_flagged_setting_section_callback_function() {
  echo '<p>This section is hidden completely behind a Kernl Feature Flag.</p>';
}

function feature_flag_setting_callback_function() {
  echo '<input name="feature_flag_setting_name" id="feature_flag_setting_name" type="checkbox" value="1" class="code" ' . checked( 1, get_option( 'feature_flag_setting_name' ), false ) . ' /> This checkbox is hidden behind a feature flag.';
}

?>
Now that we have the feature flag in our code, let’s talk about some of the optimizations the WPFeatureFlags library makes. One of the nice things about WordPress and PHP is that the code itself is stateless, meaning that without some storage mechanism (MySQL, Redis, Memcached) the entire page and all of its data are rebuilt from scratch on every request. This is great for fast development cycles but not always for performance.
The WPFeatureFlags library helps with performance by storing flags for a given user as a WordPress transient for 5 minutes. This way repeated page requests by a user don’t constantly call Kernl and introduce network latency on the request. Kernl’s feature flag API is heavily cached, but it’s better for the end user if flags are served out of your own store. What this means for you is that your users may not see changes for up to 5 minutes after you toggle a flag on/off. This is usually fine, but if you need shorter or longer intervals you can use the API directly.
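The caching pattern described above can be sketched in plain PHP. This is an illustrative stand-in, not Kernl’s actual library code: a small in-memory store plays the role of WordPress transients (get_transient()/set_transient()) so the 5-minute expiry logic is self-contained and the names are assumptions.

```php
<?php
// Illustrative sketch of transient-style flag caching.
// FlagCache mimics WordPress transient semantics: get() returns
// false when a key is missing or expired.
class FlagCache {
    private $store = [];

    public function get(string $key) {
        if (!isset($this->store[$key])) {
            return false;
        }
        [$value, $expires] = $this->store[$key];
        if (time() >= $expires) {
            unset($this->store[$key]);
            return false;
        }
        return $value;
    }

    public function set(string $key, $value, int $ttl): void {
        $this->store[$key] = [$value, time() + $ttl];
    }
}

// Fetch flags for a user, hitting the network (the $fetchFromKernl
// callable) only when the cached copy is missing or expired.
function fetch_flags_cached(FlagCache $cache, string $userIdentifier, callable $fetchFromKernl) {
    $key = 'kernl_flags_' . md5($userIdentifier);
    $flags = $cache->get($key);
    if (false === $flags) {
        $flags = $fetchFromKernl();        // network call to Kernl
        $cache->set($key, $flags, 5 * 60); // cache for 5 minutes
    }
    return $flags;
}
```

With this shape, two page loads in quick succession produce only one call to Kernl; the second is served from the cache.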
That being said, go ahead and toggle the feature flag off in Kernl. In 5 minutes (or less) you’ll see the setting disappear from the admin. No code deploy needed. “That’s great!” you say, but what about beta program management? Easy. Let’s change this flag to an “individual” flag.

Create an individually targeted feature flag

The video below shows you how to create an individually targeted feature flag. It’s the same as the on/off flag, except that you get to pick which users see the feature. One caveat is that if Kernl hasn’t seen the user yet, we can’t target them. Why? Because we don’t know how to identify a user that we haven’t seen. If you want to target an individual user without having identified them yet, you need to register them manually.

Now that the flag is targeted at an individual, that selected person will be able to see the menu setting. This is a contrived example, but you can see how this can be easily expanded to running a beta program. The best part is that you don’t need to have multiple versions of your plugin/theme out there. You can simply release one version, and toggle on features for specific people. In the future Kernl will support making groups of users so managing beta programs will be even easier!

If you have questions feel free to drop them in the comments!

Further Reading on Feature Flags

There’s a lot of great reading out there on feature flags and their uses. If you’re looking for more information about them, I highly recommend these resources.

Introducing WordPress License Management with Kernl

For the past several years Kernl has been trusted with securing access to many people’s hard work via our license management system. We recently re-imagined our entire WordPress license management system, so we want to introduce it to you.

WordPress License Management

If you’ve ever sold a plugin or theme out on the open market, worrying about your plugin getting pirated is often at the top of your mind. One way to mitigate some of that risk is to use a license management solution. Kernl’s new license management system allows you to restrict access to your plugin or theme by forcing customers to activate before functionality is enabled. We can also check license codes before updates to your plugin or theme are downloaded, allowing you to restrict how many free upgrades a customer receives.

To summarize:

  • Kernl allows you to manage license keys for your product.
  • Kernl will restrict the number of updates a license is allowed to download for your product.
  • Kernl has a REST API that can allow you to restrict usage of your plugin until a license has been activated.

License Management Example

So how might you use Kernl’s WordPress license management? An example will illustrate this best.

The example above has a function to validate if a user’s license is valid. This can be used anywhere in your code to expose functionality only if the Kernl license is valid.
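In case the embedded example doesn’t render, here is a minimal sketch of what such a helper could look like. The response shape (a ‘valid’ key) is an assumption for illustration, not Kernl’s documented API; in a plugin you would populate $response by calling your Kernl license endpoint (for example with wp_remote_get()) and decoding the JSON body.

```php
<?php
// Hypothetical helper: given a decoded license-validation response,
// decide whether the customer's license is valid. The 'valid' key
// is an assumed response shape, not Kernl's documented API.
function kernl_license_is_valid(array $response): bool
{
    return isset($response['valid']) && $response['valid'] === true;
}

// Gate premium functionality on the result:
// if (kernl_license_is_valid($response)) { /* enable premium feature */ }
```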

Restricting Update Downloads with WordPress License Management

If you would like to simply prevent your customers from downloading updates to your plugin or theme for free, just add the license parameter when you instantiate the Kernl update check class. This works the same for both plugins and themes.

The only difference between the sample above and a normal Kernl update instantiation is the inclusion of the ‘license’ property, which tells Kernl which license to try to validate with.

Going Forward

Want to give Kernl WordPress license management a try? Check out https://kernl.us and sign up. It’s free for 30 days and doesn’t require a credit card! In addition to license management and updates, we also have some great features like WordPress continuous deployment and feature flags.

Private Premium Plugin Updates with Kernl.us

If you’ve ever created a plugin for WordPress and wanted to sell it, you’ve likely run into the problem of delivering updates to your customers. Agencies and internal developers run into this problem as well. You can’t upload your plugin to the WordPress.org repository because then it will be free for everyone, but you still really want integrated update functionality.

Kernl.us is a SaaS product that helps solve this problem (and so many others!). Kernl allows you to distribute updates to your premium plugin automatically using the built-in WordPress update functionality. So how does it work?

  1. Sign up for Kernl
  2. Create an entry for your plugin in Kernl
  3. Add 2 lines of code to your plugin.
  4. Upload your plugin to Kernl and then distribute it to your customers

Let’s dive in and see how this works!

Creating a Plugin in Kernl

After you’ve signed up for Kernl, the first step to configure seamless automatic updates is to create a plugin entry in Kernl. To do so, click the “Plugins” button in the left-hand menu.

Next, click the “Add Plugin” button.

The next step is easy. Just enter the name, slug, and description of your plugin then press “Save”.

Adding Kernl Update Code

Now that you have a plugin entry in Kernl, you can add the Kernl update code to your plugin. Download the Kernl plugin update code from https://kernl.us/static/php/plugin_update_check.php and place it in the root directory of your plugin. Next, take note of the UUID of the plugin that you just created.

In your plugin’s main file, add the following code:

require 'plugin_update_check.php';
$MyUpdateChecker = new PluginUpdateChecker_2_0(
    'https://kernl.us/api/v1/updates/MyUuidFromKernl/',
    __FILE__,
    'kernl-example-plugin',
    1
);

Replace “MyUuidFromKernl” with the UUID of the plugin you just created.

Uploading Your Plugin to Kernl

Now that you have Kernl inside your plugin, you need to zip it up. At the folder level, zip the plugin using the zip tool of your choice.

If you were to extract your plugin, it should look like:

/my-plugin-slug
   plugin_update_check.php
   functions.php
   someOtherFile.php

If it looks like this instead (notice there is no nesting), Kernl will not work:

plugin_update_check.php
functions.php
someOtherFile.php

Take your plugin and click “Add Version” inside Kernl.

Next enter the version number (of the format MAJOR.MINOR.PATCH, ex 1.4.14), select the zip file you just created, and press “Save”.

Distribute Your Plugin

Now that Kernl has this version of your plugin, feel free to distribute this ZIP file to your customers. If you ever need to release an update, just make your code changes, zip them up, and upload the new version to Kernl. Within 30 seconds the update will be visible to your customers at which point they can download it!

Kernl: Important BitBucket Changes

It came to my attention that the way BitBucket handles deployment keys has changed. Until recently the same deployment key could be shared across multiple repositories. That rule has been changed and now each repository requires a unique deployment key. So what does this mean for you? You’ll need to take a few steps to make sure that your “push to build” functionality continues to work as you expect it to.

  1. I’ve deployed changes that allow you to add unique deployment keys to all of your repositories. For those of you with a lot of repositories this is going to be pretty tedious, but in the end it will give you greater access control over your repositories. Documentation for adding deployment keys can be found at https://kernl.us/documentation#deploy-key, but you likely won’t need it. Just go to “Continuous Deployment” and then click “Manage Deployment Keys” (if you don’t see that button, hard refresh).
  2. Starting tomorrow (February 21, 2017) at 7pm EST, access with the old Kernl deployment key will be cut off. From this point forward only the new deployment keys will be able to access your repository.
  3. After February 21, 2017 @ 7pm EST you can delete the old Kernl deployment key from your repositories. If you do it before then your builds will fail.

Sorry for the short notice and inconvenience of this change, but it’s necessary to make sure that all customers are able to deploy continuously with Kernl. If you have any questions or concerns about this change, please reach out. And once again, sorry for the inconvenience!