Top ten ways to speed up your website

Over the last few years as a web admin, I have realized that knowing HTML and JavaScript alone is not enough to build a fast website. To make a site faster, one needs to understand real-world problems like network latency and packet loss, which most web administrators usually ignore. Here are 10 things you should investigate before you call your website perfect. Some of these are minor configuration changes; others might require time and resources to implement.

  1. HTTP Keepalives: If HTTP keepalives are not turned on, you can get a 30% to 50% improvement just by enabling them (an Apache example follows this list). Keepalives allow multiple HTTP requests to go over the same TCP/IP connection. Since there is a performance penalty for setting up a new TCP/IP connection, using keepalives will help most websites.

  2. Compression: Enabling compression can dramatically speed up sites that transfer large web objects. Compression doesn't help much on a site with lots of images, but it can do wonders on most text/HTML-based websites. Almost all webservers that do compression automatically detect browser compatibility before compressing the HTTP response. Most browsers released since 1999 that support HTTP/1.1 also support compression by default. In real life, however, I've noticed that some plugins can create problems. An excellent example is Adobe's PDF plugin, which inconsistently failed to open some PDFs on our website when compression was enabled. In Apache it's easy to define which objects should not be compressed, so setting up workarounds is simple too (see the mod_deflate sketch after this list).

  3. Number of Objects: Reduce the number of objects per page. Most browsers won't download more than 2 objects at a time from the same server (the limit recommended by RFC 2616). This may not seem like a big deal, but if you are managing a website with an international audience, network latency can dramatically slow down the load time of a page with many objects. The other day I checked Google's search page and noticed that it had only one image file in addition to the HTML page. That's an amazingly lean website. In real life not all sites can be like that, but using image maps with JavaScript to simulate buttons can do wonders. Merging HTML, JavaScript and CSS into a single file is another common way of reducing objects. Most modern sites avoid using images for buttons entirely and stick to buttons made of HTML/CSS/JavaScript.

  4. Multiple Servers: If you can't reduce the number of objects, try to distribute your content over multiple servers. Most browsers enforce their connection limit per server, so objects spread across different servers can be downloaded over more connections in parallel. For example, what would happen if an HTML page with 4 JPEG images used server1.domain.com for 2 images and server2.domain.com for the other 2, instead of putting all of them on one server? In most browsers you would notice roughly a 2x speed improvement. Firefox and IE can both be modified to raise this limit, but you can't ask each of your visitors to do that. (A sketch of serving two such hostnames from one Apache instance follows this list.)

  5. AJAX: Using AJAX won't always speed up your website, but having JavaScript respond to a user's click immediately can make it feel very responsive. More interactive sites use AJAX technologies today than ever before. In some cases, sites built on Java and Flash have moved to AJAX to do the same work in fewer bytes.

  6. Caching: Setting an expiry HTTP header on objects tells browsers to cache them for a predefined duration. If your site doesn't change very often, or if a certain set of pages or objects changes less frequently than the rest, set the expiry header for those file types accordingly (see the mod_expires sketch after this list). Browsers visiting your site should see speed improvements almost immediately. I've seen sites with more than 50 image objects in a single HTML file do amazingly well thanks to browser caching.

  7. Static Objects on a fast webserver: Web application servers are almost always proxied through a webserver. While application servers do a good job of generating dynamic content, they are not well suited to serving static objects. In most cases you will see significant speed improvements if you offload static content to the webserver, which can do the same job more efficiently (see the proxy sketch after this list). Adding more application servers behind a loadbalancer can help too. While on the topic, remember that the language you choose to serve your application can make or break your business. Prototyping can be done in almost any language, but heavily used websites should investigate the performance, productivity and security gains/losses of moving to other platforms/languages like Java/.Net/C/C++.

  8. TCP/IP initial window size: The default initial TCP/IP window size on most operating systems is conservatively defined and can cause download/upload speed problems. TCP/IP starts with a small window and tries to find the optimal size over time. Unfortunately, because the initial value is low and HTTP connections don't last very long, many connections are over before the window ever grows; raising the initial value can dramatically speed up transmission to remote, high-latency networks.

  9. Global Loadbalancing: If you have already invested in some kind of simple loadbalancing technology and are still having performance problems, start investigating global loadbalancing, which allows you to deploy servers around the world and use intelligent loadbalancing devices to route each client to the closest web server. If your organization can't afford to set up multiple sites around the world, investigate global caching services like Akamai.

  10. Webserver Log Analysis: Make it a habit to analyze your webserver logs on a regular basis to look for errors and bottlenecks. You would be surprised how much you can learn about your own site by looking at your logs. One of the first things I look for is the objects that are requested most often or that consume the most bandwidth; compression and expiry headers can both help there. I regularly look for 404s and 500s to catch missing pages or application errors. Understanding which countries your customers come from and what times they tend to visit can help you understand latency or packet loss problems. I use awstats for my log analysis (see the LogFormat sketch after this list).
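
A few minimal Apache 2.x configuration sketches for the items above follow. None of them come from the original post; every value, path, hostname and name is an illustrative assumption to tune for your own setup. For item 1, enabling keepalives looks like this:

    # httpd.conf -- enable persistent connections (example values)
    KeepAlive On
    # How many requests a single connection may serve before it is closed
    MaxKeepAliveRequests 100
    # How long (in seconds) to wait for the next request on an idle connection
    KeepAliveTimeout 5

A short timeout keeps idle connections from tying up server processes while still letting a page and its objects share one connection.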
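
For item 2, a sketch of compressing text content with mod_deflate while exempting PDFs, assuming mod_deflate and mod_setenvif are available in your build:

    # Compress common text types only; images and PDFs are left alone
    LoadModule deflate_module modules/mod_deflate.so
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
    # Workaround for plugin problems such as the PDF issue described above
    SetEnvIfNoCase Request_URI \.pdf$ no-gzip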
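
For item 4, one inexpensive way to get two image hostnames is to point both at the same Apache instance and document root; the browser still treats them as separate servers and opens more connections in parallel. The hostnames are the hypothetical ones from the example above, and the document root is an assumption:

    # server1.domain.com and server2.domain.com resolve to this box and serve
    # the same files, so a page can split its <img> URLs between them
    <VirtualHost *:80>
        ServerName  server1.domain.com
        ServerAlias server2.domain.com
        DocumentRoot "/var/www/images"
    </VirtualHost>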
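
For item 6, a mod_expires sketch that sets an expiry header per file type (the durations are examples, not recommendations from the post):

    LoadModule expires_module modules/mod_expires.so
    ExpiresActive On
    # Objects that rarely change can be cached aggressively by the browser
    ExpiresByType image/png  "access plus 30 days"
    ExpiresByType image/jpeg "access plus 30 days"
    ExpiresByType text/css   "access plus 7 days"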
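
For item 7, a sketch of letting Apache serve static files itself while proxying everything else to an application server; the hostname appserver.internal and the paths are assumptions:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    # Static files are served directly by Apache...
    Alias /static/ "/var/www/static/"
    ProxyPass /static/ !
    # ...while dynamic requests are handed to the application server
    ProxyPass / http://appserver.internal:8080/
    ProxyPassReverse / http://appserver.internal:8080/

The exclusion rule must come before the catch-all ProxyPass so that /static/ requests never reach the application server.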
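
For item 10, logging the time taken to serve each request makes slow objects easy to spot in whatever analyzer you use (awstats or otherwise). A sketch where %D records the service time in microseconds; the format name and log path are illustrative:

    # The "combined" format plus %D (microseconds spent serving the request)
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_timed
    CustomLog logs/access_log combined_timed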


References:

RFC 2616, "Hypertext Transfer Protocol -- HTTP/1.1" (Fielding et al., June 1999).

[P.S.: This site, royans.net, is unfortunately not physically maintained by me, so I have limited control to make changes to it.]
