Web page caching gets tricky once personalization is involved. Let's take Twitter's public_timeline for example, which seems perfect for caching. Unfortunately, when a user is logged in, the page also shows that user's information. Caching that page in its entirety on the web server may therefore not be an option. Another scenario is where parts of a page expire faster than others (i.e., require different cache TTLs). Here again, caching the whole page doesn't help.
Edge Side Includes (ESI) is a markup language specifically designed to help web servers assemble dynamic content at the web layer.
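For example, a page shell can pull in a separately cached fragment with an include tag, as defined in the ESI 1.0 spec (the src URL here is hypothetical):

```html
<html>
  <body>
    <!-- Cacheable for everyone, long TTL -->
    <div id="timeline">...public timeline markup...</div>

    <!-- Assembled per-request by the ESI processor; short or no TTL -->
    <esi:include src="/fragments/user_info" />
  </body>
</html>
```

The surrounding page and the included fragment are fetched and cached as independent objects, each with its own freshness rules.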
An ESI include tag is similar to the include tags in JSP/PHP/etc., which let one page refer to another page for parts of its content. By breaking the page up into smaller objects, the web server can apply different TTL settings (and user validation) to different parts of the content. Twitter used to (and may still) use "Varnish", which supports a subset of the ESI specification out of the box.
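In Varnish, ESI processing is switched on per response in VCL. A minimal sketch for Varnish 4+, assuming the hypothetical public_timeline URL and TTL values:

```vcl
sub vcl_backend_response {
    # Parse ESI tags only in the page shells that contain <esi:include>
    if (bereq.url == "/public_timeline") {
        set beresp.do_esi = true;
        set beresp.ttl = 30s;   # hypothetical short TTL for the shell
    }
}
```

The fragments referenced by the shell are cached as ordinary objects with whatever TTLs their own response headers dictate.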
But caching on the webserver may not be the real reason why this language was invented. ESI is also supported by Akamai (CDN) on its edge caching product. By allowing Akamai edge nodes to do the assembling close to the user, they significantly improve perceived end-user performance without giving up personalization or content freshness requirements.
A few weeks ago the company I work with noticed a weird problem with its CDN (Content Delivery Network) provider. HEAD requests were being answered by the CDN edge nodes from cached objects that had already expired. What's worse, even after an explicit content expiry notification was sent, the HEAD responses were still wrong. Long story short, the CDN provider had to set up bypass rules so that HEAD requests always bypass the cache. This carried a slight performance overhead, but the workaround solved the problem.
Now while this was going on, one of the folks at CDN support helping us mentioned something about ETags and why we should be using them. I didn't understand how ETags would solve the problem if the CDN itself had a bug that ignored expiry information, but I said I'd investigate.
Anyway, the traditional way of communicating whether a cached object is still valid is the Last-Modified timestamp. An ETag is another way of doing that, except that it's more accurate.
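In practice a cache revalidates with a conditional request: it echoes the validator back and, if nothing changed, the server answers 304 with no body. A sketch of the exchange (host, path, and ETag value are hypothetical):

```http
GET /public_timeline HTTP/1.1
Host: example.com
If-None-Match: "2d-5a1-4f6b2c"

HTTP/1.1 304 Not Modified
ETag: "2d-5a1-4f6b2c"
```

With Last-Modified instead, the cache would send If-Modified-Since, which only has one-second resolution and can miss rapid successive updates; the opaque ETag avoids that.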
A little more digging explained that an ETag (at least as Apache generates one by default) is not a hash of the file's contents, but a combination of the file's inode number, size, and last-modified timestamp. This is definitely more accurate, and I could see why it might be better than the last-modified timestamp alone. But what the CDN support guy didn't mention is that if you are serving content from multiple webservers, even if you rsync the content between the servers, the ETags will always differ, because rsync and other standard copy commands have no control over which inode number gets used.
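The problem is easy to demonstrate. Here is a rough sketch of how older Apache builds its default ETag from inode, size, and mtime (the exact byte format differs slightly; this is illustrative, not Apache's code):

```python
import os
import shutil
import tempfile

def apache_style_etag(path):
    """Sketch of Apache's default ETag: hex inode, size, and
    mtime joined with dashes. Illustrative approximation only."""
    st = os.stat(path)
    return '"%x-%x-%x"' % (st.st_ino, st.st_size, int(st.st_mtime))

# Two byte-identical files, as rsync would leave them on two servers.
# copy2 preserves the mtime, so only the inode differs between them.
d = tempfile.mkdtemp()
a, b = os.path.join(d, "a.html"), os.path.join(d, "b.html")
with open(a, "w") as f:
    f.write("<html>same content</html>")
shutil.copy2(a, b)
print(apache_style_etag(a))
print(apache_style_etag(b))  # same size and mtime, different inode
```

Identical content, identical size and mtime, yet the two "servers" hand out different ETags, so a cache validating against one server gets a miss from the other.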
A little more searching on the net confirmed that this is a known problem, and that ETags should probably be shut off (or modified so that they don't use inodes) on servers behind load balancers.
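On Apache httpd the knob for this is the FileETag directive (assuming Apache here; other servers have analogous settings). A sketch:

```apache
# Build ETags from mtime and size only, so byte-identical files on
# different servers behind a load balancer produce matching ETags.
FileETag MTime Size

# Or drop ETags entirely and fall back to Last-Modified:
# FileETag None
```

Either option keeps validators consistent across a server farm; the first preserves ETag accuracy, the second trades it away for simplicity.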