- HTTP Keepalives: If HTTP keepalives are not turned on, enabling them alone can yield 30% to 50% improvements. Keepalives allow multiple HTTP requests to travel over the same TCP/IP connection. Since there is a performance penalty for setting up each new TCP/IP connection, keepalives help most websites.
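In Apache, for example, keepalives are controlled by a handful of directives in httpd.conf (the values below are illustrative defaults, not tuned recommendations):

```apache
# Allow multiple requests per TCP connection
KeepAlive On
# Cap the number of requests per connection; 0 means unlimited
MaxKeepAliveRequests 100
# Seconds to wait for the next request before closing the connection
KeepAliveTimeout 5
```

Keep the timeout short on busy servers, since each idle keepalive connection ties up a worker.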
- Compression: Enabling compression can dramatically speed up sites which transfer large web objects. Compression doesn't help much on a site with lots of images, but it can do wonders on most text/html based websites. Almost all webservers that do compression automatically detect whether the browser supports it before compressing the HTTP response. Most browsers released since 1999 that support HTTP/1.1 also support compression by default. In real life, however, I've noticed some plugins can create problems. An excellent example is Adobe's PDF plugin, which inconsistently failed to open some PDFs on our website when compression was enabled. In Apache it's easy to define which objects should not be compressed, so setting up workarounds is simple too.
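With Apache 2.x and mod_deflate, for instance, a minimal setup that compresses text content while excluding PDFs (to work around the plugin problem above) could look like this:

```apache
# Requires mod_deflate to be loaded
AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript

# Workaround: skip compression for PDFs so plugins receive them unmodified
SetEnvIfNoCase Request_URI \.pdf$ no-gzip
```

The same `no-gzip` environment-variable trick can exclude old or broken browsers as well.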
- Multiple Servers: If you can't reduce the number of objects, try distributing your content over multiple servers. Most browsers enforce an upper limit on the number of open connections to a single server, but that limit is applied per hostname, so objects served from a different hostname can be fetched in parallel. For example, what happens if an HTML page with 4 JPEG images serves 2 of them from server1.domain.com and 2 from server2.domain.com instead of putting all of them on one server? In most browsers you will notice roughly a twofold speed improvement for those objects. Firefox and IE can both be modified to increase the connection limit, but you can't ask each of your visitors to do that.
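A sketch of the four-image example above, using the two hostnames from the text (both hostnames can simply be DNS aliases for the same machine):

```html
<!-- Two images per hostname lets the browser open connections to both hosts in parallel -->
<img src="http://server1.domain.com/images/photo1.jpg" alt="photo 1">
<img src="http://server1.domain.com/images/photo2.jpg" alt="photo 2">
<img src="http://server2.domain.com/images/photo3.jpg" alt="photo 3">
<img src="http://server2.domain.com/images/photo4.jpg" alt="photo 4">
```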
- Caching: Setting an expiry HTTP header on objects tells browsers to cache them for a predefined duration. If your site doesn't change very often, or if a certain set of pages or objects changes less frequently, set the expiry header for that file type accordingly. Browsers visiting your site should see speed improvements almost immediately. I've seen sites with more than 50 image objects in a single HTML file doing amazingly well due to browser caching.
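In Apache this is done with mod_expires; a sketch, with illustrative durations per content type:

```apache
# Requires mod_expires to be loaded
ExpiresActive On
# Images rarely change; let browsers cache them for a month
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/gif "access plus 1 month"
# HTML changes more often; keep the caching window short
ExpiresByType text/html "access plus 1 hour"
```

If you later change a long-cached object, the usual trick is to rename it (or version its URL) so browsers fetch the new copy.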
- Static Objects on a fast webserver: Web application servers are almost always proxied through a webserver. While application servers can do a good job of generating dynamic content, they are not the best suited to serving static objects. In most cases you will see significant speed improvements if you offload static content to the webserver, which can do the same job more efficiently. Adding more application servers behind a loadbalancer can help too. While on the topic, remember that the language you choose to serve your application can make or break your business. While prototyping can be done in almost any language, heavily used websites should investigate the performance, productivity and security gains/losses of moving to other platforms/languages like Java/.Net/C/C++.
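With Apache and mod_proxy, the split might be sketched like this (the app server hostname and paths are hypothetical; the `!` exclusion must precede the catch-all `ProxyPass`):

```apache
# Serve static objects straight from the webserver's filesystem
Alias /images/ /var/www/static/images/

# Exclude the static path from proxying, then send everything else
# to the application server behind Apache
ProxyPass /images/ !
ProxyPass / http://appserver.internal:8080/
ProxyPassReverse / http://appserver.internal:8080/
```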
- TCP/IP initial window size: The default initial TCP/IP window sizes on most operating systems are conservatively defined and can limit download/upload speed. TCP/IP starts with a small window and grows it toward an optimal size over time. Unfortunately, since the initial value is low and HTTP connections don't last very long, many connections finish before the window ever grows; raising the initial value can dramatically speed up transmission to remote high-latency networks.
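On Linux (kernel 2.6.39 and later), the initial congestion window can be raised per route with `ip route`; a sketch, assuming a hypothetical gateway and interface which you would substitute from your own routing table (requires root):

```shell
# Show the current default route; note the gateway and device it reports
ip route show default

# Raise the initial congestion window to 10 segments on that route
# (192.0.2.1 and eth0 are placeholders for the values shown above)
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
```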
- Global Loadbalancing: If you have already invested in some kind of simple loadbalancing technology and are still having performance problems, start investigating global loadbalancing, which lets you deploy multiple servers around the world and use intelligent loadbalancing devices to route each client to the closest web server. If your organization can't afford to set up multiple sites around the world, investigate global caching services like Akamai.
- Webserver Log Analysis: Make it a habit to analyse your webserver logs on a regular basis, looking for errors and bottlenecks. You would be surprised how much you can learn about your own site from its logs. One of the first things I look for are the objects that are requested most often or that consume the most bandwidth; compression and expiry headers can both help there. I regularly look for 404s and 500s to find missing pages or application errors. Understanding where your customers come from (by country) and what times they visit can help you understand latency or packet loss problems. I use awstats for my log analysis.
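Even without awstats, a few one-liners answer these questions against an Apache combined-format log. A sketch, using a small fabricated `access.log` for illustration (in this format field 7 is the URL, field 9 the status code, field 10 the bytes sent):

```shell
# Fabricated sample log in Apache combined format
cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /index.html HTTP/1.1" 200 5120
1.2.3.4 - - [01/Jan/2024:00:00:02 +0000] "GET /logo.png HTTP/1.1" 200 20480
5.6.7.8 - - [01/Jan/2024:00:00:03 +0000] "GET /missing.html HTTP/1.1" 404 512
5.6.7.8 - - [01/Jan/2024:00:00:04 +0000] "GET /logo.png HTTP/1.1" 200 20480
EOF

# Most-requested objects
awk '{print $7}' access.log | sort | uniq -c | sort -rn | head

# Count of 404 and 500 responses
awk '$9 == 404 || $9 == 500' access.log | wc -l

# Total bandwidth consumed per object
awk '{bytes[$7] += $10} END {for (o in bytes) print bytes[o], o}' access.log | sort -rn | head
```

The objects at the top of the first and third lists are the prime candidates for compression and long expiry headers.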
[p.s: This site royans.net unfortunately is not physically maintained by me, so I have limited control to make changes on it.]