Here are some steps that we’ve seen to have the most impact:

1. Enable caching

Caching is probably the single biggest speed boost you can get when optimizing your server.

We were able to cut load times by more than 50% on many of the sites we manage.

With caching, the server does not have to spend time fetching files from the disk, executing the application code, fetching database values and assembling the result into an HTML page EACH TIME someone refreshes a page.

The server can just take a processed result, and send it to the visitor. See how easy that is?

There are several locations in which you can enable cache:

  • OpCode cache – This stores the compiled bytecode of your application scripts, so the server doesn’t have to recompile them on every request. It can save several seconds for complex applications like Magento or Drupal.
  • Memory cache – This stores bits of data generated by apps in system memory, and when the same bit of data is requested, it is served without the need for processing. Faster than OpCode cache, and ideal for large load balanced sites.
  • HTTP cache – These are web server proxies that store whole HTML pages. So if the same page is requested, it is served immediately. This is by far the fastest and is ideal for high-traffic, smaller web apps.
  • Application cache – Some applications such as Magento and Drupal store processed template files as pages to reduce processing time. This can be used in conjunction with any of the above caches.

Any of these caches can improve your server speed. BUT, you’ll need to do a bit of trial and error to know which combination of these caches is ideal for your application.
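As a concrete starting point, here is a minimal PHP OpCode cache configuration (php.ini or a separate opcache.ini); the values are illustrative starting points, not prescriptions, and should be tuned to your application:

```ini
; Example OPcache settings -- starting points, tune to your app
opcache.enable=1
; memory reserved for compiled scripts, in MB
opcache.memory_consumption=128
; raise this if your app has more than ~10k PHP files (Magento often does)
opcache.max_accelerated_files=10000
; re-check files for changes at most once every 60 seconds
opcache.revalidate_freq=60
```

For the memory cache layer, Memcached or Redis fills the same role at the data level and pairs well with these settings.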

2. Set up a fast reverse proxy

Your server sends HTML files to a visitor’s browser.

What if another visitor requests the same file?

Normally your server fetches the scripts from the disk, executes them, fills in the data and assembles the HTML file. But wouldn’t it be easier and so much faster to just send that file from memory?

That’s what an HTTP reverse proxy does. It sits between your server and the visitors. If a second customer asks for the same file, it’ll quickly serve the file from memory. That’s super quick.

Almost all popular web servers can be configured as a reverse proxy. Here are the top few:

  • Nginx – This is the hot favorite right now among the busiest websites (as per the Netcraft Jan 2018 survey). We’ve used it for small and large content-heavy sites. It has proven reliable against traffic spikes and is a safe bet because of its stability and customizability.
  • Varnish – A bit more complex than Nginx to deploy, but sites with heavy traffic and a lot of content (eg. online publishers) can see a considerable gain in speed with Varnish.
  • Lighttpd – If you have a monster site, and resource usage spikes are common, Lighttpd can help you out. It’s lightweight and not likely to drag down the server.

Of course, there are many more options, such as Squid, Apache or IIS, but these are the most popular and successful ones.
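To illustrate, here is a minimal Nginx reverse-proxy-with-cache sketch; the backend address, cache path and cache timings are assumptions you’d adapt to your own setup:

```nginx
# Example: Nginx in front of a backend app server -- paths and upstream are placeholders
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # your backend (Apache / app server)
        proxy_cache appcache;
        proxy_cache_valid 200 301 10m;      # keep successful responses for 10 minutes
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        add_header X-Cache-Status $upstream_cache_status;  # handy for spotting hits vs misses
    }
}
```

The X-Cache-Status header is worth keeping during rollout: it shows per-request whether the page came from memory (HIT) or the backend (MISS).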

3. Choose the right application server

Many application owners use apps that are installed by default in their servers.

For example, CentOS servers ship with PHP 5.4 by default, not the latest PHP 7.2 with FPM (FastCGI Process Manager), which has enormous speed advantages.

VPS, Cloud and Dedicated server owners are often unaware of the differences and keep trying to optimize their site code to fix speed issues.

By just changing the application server, tweaking the settings to match the site load, and enabling cache, we’ve been able to improve application load speeds by more than 100% in some cases.

If you have never changed your default application settings, maybe you have a low hanging fruit right there.
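As an example of that tuning, a PHP-FPM pool (e.g. /etc/php-fpm.d/www.conf) can be sized to the site load; the numbers below are a sketch for a modest VPS, not a recommendation:

```ini
; Example PHP-FPM pool sizing -- scale pm.max_children to your available RAM
pm = dynamic
; hard cap on worker processes (roughly: free RAM / per-worker memory)
pm.max_children = 25
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
; recycle workers periodically to contain memory leaks
pm.max_requests = 500
```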

4. Fine tune your web server

Almost all Linux servers have Apache as the web server, and here are a few settings we audit and tweak.

  • Timeout – This setting determines how long Apache will wait for a visitor to send a request. It has to be set based on the server traffic. On busy servers, we set it as high as 120 seconds, but it is best to keep this value as low as possible to prevent resource wastage.
  • KeepAlive – When “KeepAlive” is set to “On”, Apache uses a single connection to transfer all the files to load a page. This saves time in establishing a new connection for each file.
  • MaxKeepAliveRequests – This setting determines how many files can be transferred via a single KeepAlive connection. Unless there’s a reason not to (like resource constraints), it can be set to “0”, which means unlimited.
  • KeepAliveTimeout – This setting makes sure that a KeepAlive connection is not abused. It says how long should Apache wait for a new request before it resets the connection. In heavily loaded servers, we’ve found 10 secs to be a good limit.
  • MaxClients – This setting tells Apache how many visitors can be served simultaneously. Setting it too high will cause resource wastage, and setting it too low will result in lost visitors. So we set it at an ideal value based on the visitor base.
  • MinSpareServers & MaxSpareServers – Apache keeps a few “workers” on standby to handle a sudden surge of requests. If your site is prone to visit spikes, configure these variables. In heavily loaded servers, we’ve found the MinSpareServers value of 10 and MaxSpareServers value of 15 to be a good limit.
  • HostnameLookups – Apache can try to find out the hostname of every IP that connects to it, but that would be a waste of resources. To prevent that, we set HostnameLookups to “Off”.
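Put together, the settings above look like this in httpd.conf. The values are the illustrative ones from this list; tune them to your own traffic pattern:

```apacheconf
# Example httpd.conf tuning -- values are illustrative, adjust to your traffic
Timeout 120
KeepAlive On
# 0 means unlimited requests per KeepAlive connection
MaxKeepAliveRequests 0
KeepAliveTimeout 10
HostnameLookups Off

<IfModule mpm_prefork_module>
    MinSpareServers 10
    MaxSpareServers 15
    # note: MaxClients was renamed MaxRequestWorkers in Apache 2.4
    MaxClients 150
</IfModule>
```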


5. Turn on HTTP/2

This is an extension to the above point but gets its own heading because HTTP/2 is a fairly recent development and not many are aware of its benefits.

All web servers right now use HTTP protocol v1.1 by default. But they all have support for HTTP v2, which is the latest version and contains a ton of performance improvements.

HTTP/2 improves server response time by:

  • Using a single connection instead of time-consuming parallel connections to transfer files.
  • Transferring important files first to complete a page.
  • Using compression to speed up header transfer.
  • Using binary data instead of bulky text data transfer.
  • Pushing all the files needed to render a page before the browser requests them. This saves valuable seconds on sites using multiple CSS, JS and image files (which is basically all modern sites).

In addition, HTTP/2 requires you to use SSL (browsers only support it over HTTPS), which makes your site secure by default.

So, it’s really a no-brainer to use HTTP/2.

However, keep in mind that there are several things you need to configure when setting up HTTP/2. Some of these are:

  • Switching the whole site to HTTPS. You’ll need to set up redirects for site links. Also, you can save money by using free SSL certificates from Let’s Encrypt.
  • Make sure your reverse proxies are also properly configured for HTTP/2.
  • Upgrade your web server to a version that supports server PUSH (Nginx supports it from v1.13.9).
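For example, on Nginx the whole setup comes down to a few directives; the certificate paths below assume Let’s Encrypt and are placeholders:

```nginx
# Example HTTP/2 setup on Nginx -- domain and cert paths are placeholders
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # server push (nginx 1.13.9+): send the main stylesheet with the page
        http2_push /css/main.css;
    }
}

# redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```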

6. Defragment your database tables & optimize server settings

All modern websites use databases to store site content, product data and more.

Every day, visitors post new comments, webmasters add new pages, modify or remove older pages and add or remove listed products.

All this activity leaves “holes” in the database tables. These are little gaps where data was deleted but never filled back in. This is called “fragmentation” and can cause longer data fetch times.

Database tables that have more than 5% of their size as “holes” should be fixed.

So, every month (at least), check your database tables for fragmentation and run an optimization query. It’ll keep your site from turning sluggish.
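A quick way to find fragmented tables is to check the data_free column in information_schema; the schema and table names below are placeholders:

```sql
-- find tables where free ("hole") space exceeds 5% of the table size
SELECT table_name, data_free, (data_length + index_length) AS total_size
FROM information_schema.tables
WHERE table_schema = 'your_database'   -- placeholder: your schema name
  AND data_free > 0.05 * (data_length + index_length);

-- rebuild a fragmented table (for InnoDB this recreates the table)
OPTIMIZE TABLE your_table;
```

Note that OPTIMIZE TABLE locks the table while it runs, so schedule it for a low-traffic window.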

Optimize database server settings

Every time you upgrade your web application or add a new plugin or module, the kind of queries executed on the database changes. And as the traffic to your site grows, the number of queries executed on the database increases.

That means the load on your database keeps changing as your site grows older and more complex. If your database settings are not adjusted to accommodate these changes, your site will run into Memory or CPU bottlenecks.

That is why it is important to monitor database metrics such as query latency, slow queries, memory usage, etc. and make timely setting changes to prevent issues.

Some of the commonly modified database settings are:

  • max_connections – In multi-user servers, this setting is used to prevent a single user hogging the entire server. In heavily loaded shared servers, this limit can be as low as 10, and in dedicated servers, it can be as high as 250.
  • innodb_buffer_pool_size – In MySQL databases using the InnoDB engine, table and index data are cached in a memory area called the “buffer pool” for fast access. We set this value anywhere between 50–70% of the RAM available to MySQL.
  • key_buffer_size – This setting determines the cache size for MyISAM table indexes. It is set to approximately 20% of the memory available to MySQL.
  • query_cache_size – This is enabled only for single website servers, and is set to 10MB or less, depending on how slow the queries are at present.
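In my.cnf, the settings above might look like this for a dedicated database server with 4 GB of RAM (example sizing only; scale to your own hardware):

```ini
# Example my.cnf sizing for a dedicated 4 GB database server
[mysqld]
max_connections         = 150
# roughly 50-70% of the RAM available to MySQL
innodb_buffer_pool_size = 2G
# mainly relevant if you still have MyISAM tables
key_buffer_size         = 256M
query_cache_size        = 10M
```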

If your database has not been optimized in a while, your site might be due for one.

7. Fix your DNS query speed

200 milliseconds. That’s how fast Google wants your server to respond, and it’s pretty much the standard now.

And do you know what’s the biggest threat to that kind of loading speed? DNS queries.

Ideally, your site’s DNS should respond in 30 milliseconds or less, but a lot of sites go well beyond the 200 ms mark for DNS resolution. This is especially true for traffic from outside the country where the site is hosted.

The primary hurdle here is distance. As the distance between the browser and the DNS server increases, queries take longer to resolve.

The only real way to fix this is to use a distributed DNS cluster. Get 3 low-cost VPS servers in different parts of the world (Europe, America, Australia), and then configure master-slave DNS servers in all of them.
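With BIND, the master-slave wiring is a few lines of named.conf on each server; the domain is a placeholder and the IP addresses below are documentation examples:

```
# named.conf on the master (198.51.100.5)
zone "example.com" {
    type master;
    file "/var/named/example.com.zone";
    also-notify { 203.0.113.10; };
    allow-transfer { 203.0.113.10; };
};

# named.conf on a slave (203.0.113.10)
zone "example.com" {
    type slave;
    masters { 198.51.100.5; };
    file "slaves/example.com.zone";
};
```

Each slave then answers authoritatively from its own region, so visitors hit the nearest server.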

Then optimize each of them for fast responses. The details of how to do that are well beyond the scope of this article.

8. Trim down your site’s critical rendering path

“Critical rendering path” is a scary sounding phrase, but it’s simple really.

Your site’s index.html loads first. In it, there’ll be links to the CSS, JS and image files on your site. Those CSS files may, in turn, link to other files.

The lower the number of files (and their size) needed to load your site, the better. That’s what “Optimizing Critical Rendering Path” means.

So, if your site has a lot of plugins or visual effects, you can be pretty sure it needs a bit of optimization.

You can do it by:

  • Deleting unused themes and plugins.
  • Reducing the size of images.
  • Combining and minifying JS and CSS files.
  • Compressing these files on disk.
  • Deferring files not needed for the initial render, using the “async” or “defer” attributes.
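For instance, the deferring step looks like this in your page’s HTML (the file names are examples):

```html
<!-- defer non-critical scripts so they don't block the first render -->
<script src="/js/analytics.js" defer></script>
<script src="/js/comments.js" async></script>

<!-- load the full stylesheet without blocking, applying it once fetched -->
<link rel="preload" href="/css/site.css" as="style" onload="this.rel='stylesheet'">
```

“defer” keeps execution order and runs after parsing; “async” runs as soon as the file arrives, so use it only for scripts with no dependencies.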


9. Disable resource-intensive services

Many server owners don’t mess with the default settings on a server, so they never disable services they don’t use. Those services sit there consuming memory and CPU.

And some even add services like backup and analytics on top of that – which often run during peak traffic times.

Fixing this is an easy win.

Look for all the services enabled in your server, and disable the ones you don’t need.
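On a systemd-based server, that audit is a couple of commands; cups (printing) is just an example of a service a web server rarely needs:

```
# list everything set to start at boot
systemctl list-unit-files --state=enabled

# inspect a suspect service before touching it
systemctl status cups.service

# stop it now and keep it from starting at boot
systemctl disable --now cups.service
```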

10. Upgrade your hard disk to SSD

OK, as the last point, take a look at your hard disk.

The biggest drag on server performance is disk I/O. That is the time taken for the hard disk to spin, spin and spin to collect all the data your site needs.

In 2018, you don’t have to wait for that. SSD storage functions much like server memory. No spinning.

So, get an SSD disk for at least your database partition. It alone can cut down your load time by close to 10%.
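Moving just the MySQL data directory to an SSD mount is a common first step. A rough sketch (the paths are examples, and take a backup first):

```
# stop the database before touching its files
systemctl stop mysqld

# copy the data directory to the SSD, preserving permissions
rsync -a /var/lib/mysql/ /mnt/ssd/mysql/

# then point MySQL at the new location in /etc/my.cnf under [mysqld]:
#   datadir=/mnt/ssd/mysql
# (on SELinux systems, also run: restorecon -R /mnt/ssd/mysql)

systemctl start mysqld
```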


Hire a server admin to take care of your server.
And let them keep it top-notch.
