Detecting browser bandwidth (in Perl)

If your website serves downloads that run into megabytes, they can take several minutes to complete from far-away places. Detecting a user's bandwidth and predicting how long a download will take can help your customers understand why it is taking so long. Detecting a client's bandwidth can be as simple as timing the download of a small file, but there are a few problems with this approach.
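
As a rough illustration of that naive approach, here is a minimal Perl sketch (an assumption for this writeup, not code from the original post) that times a single fetch of a sample file and converts it into a throughput figure. The URL and file name are placeholders.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Hypothetical sample file; in practice this would live on your own server.
    my $url = 'http://www.example.com/bandwidth/sample-1mb.bin';

    my $ua = LWP::UserAgent->new;

    my $t0       = [gettimeofday];      # start the clock
    my $response = $ua->get($url);      # fetch the sample file
    my $elapsed  = tv_interval($t0);    # elapsed seconds, sub-second resolution

    die "Download failed: " . $response->status_line . "\n"
        unless $response->is_success;

    my $bytes = length($response->content);
    printf "Downloaded %d bytes in %.2f s (%.1f KB/s)\n",
        $bytes, $elapsed, ($bytes / 1024) / $elapsed;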

To begin with, most browsers can open multiple parallel connections to the same destination (IE uses 2, Firefox uses 4). This is not a problem in itself, but it's good to know. Then there is TCP connection setup and teardown overhead, the impact of which can be minimized by using larger files and enabling keepalive. The biggest problem, however, is the caching intelligence within the browser, which can trick the detection logic into thinking the client has superfast network connectivity. The same problem can also skew results for multiple browsers sitting behind a caching proxy server.

The solutions to all of these problems are relatively simple. First of all, use multiple file downloads to make use of all of the browser's connections to the server. Enable keepalives on the server to minimize TCP restart overhead. Use relatively large files for sampling, and finally, append a random number as a URL parameter ("?randomnumber") so that the browser and any intermediate proxy discard previously cached versions of the file. A sketch pulling these ideas together follows below.
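
Pulling these pieces together, the sketch below (again an assumption, with placeholder host and file names) reuses a keepalive connection, fetches a few large sample files with random cache-busting query parameters, and averages the observed throughput.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Hypothetical large sample files hosted alongside the real downloads.
    my @samples = map { "http://www.example.com/bandwidth/sample-${_}mb.bin" } 1 .. 3;

    # keep_alive => 1 reuses the TCP connection between requests, which pairs
    # with the KeepAlive setting on the server.
    my $ua = LWP::UserAgent->new(keep_alive => 1);

    my ($total_bytes, $total_time) = (0, 0);

    for my $url (@samples) {
        # A random query parameter makes each URL unique, so browser and
        # proxy caches cannot serve a previously downloaded copy.
        my $busted = $url . '?' . int(rand(1_000_000_000));

        my $t0       = [gettimeofday];
        my $response = $ua->get($busted);
        my $elapsed  = tv_interval($t0);

        next unless $response->is_success;

        $total_bytes += length($response->content);
        $total_time  += $elapsed;
    }

    die "No successful samples\n" unless $total_time > 0;

    printf "Estimated bandwidth: %.1f KB/s\n", ($total_bytes / 1024) / $total_time;

Averaging over several samples here stands in for the multiple parallel connections a browser would open; a browser-side version would issue those requests concurrently against the same cache-busted URLs.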
