Own The Cache


I appreciate what the browser does for me, caching what it can, but man, that black box feels… well, too black-boxy. It has a complex algorithm that attempts to work out what it should keep and what it can safely purge, but I know it isn’t optimal for my needs, either as an end user or as a developer trying to serve users.

I think that with recent advancements we are getting closer to being able to really own the cache to better serve everyone:

Pre-fill the cache

Just as a teacher needs to be one class ahead of the student, if you can fill up the cache ahead of time the experience will feel fantastic. You can tell the browser to do just that with the various resource hints (prefetch, prerender, etc.), and with service workers you get more fine-grained control through programmatic access to a cache. It is a common pattern to cache as many static assets as possible when a service worker installs. You want to make sure the service worker JavaScript file itself isn’t cached too far in advance, but everything else that can be should be far-future expired with a unique URL.

http://caniuse.com/#feat=link-rel-prerender
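
A minimal sketch of that install-time precache pattern (the cache name and asset URLs here are made up for illustration):

```javascript
// sw.js — precache static assets at install time (sketch; URLs are hypothetical).
const PRECACHE = 'static-v1';
const PRECACHE_URLS = [
  '/styles/main.abc123.css', // unique, far-future-expired URLs
  '/scripts/app.def456.js',
  '/images/logo.789xyz.png',
];

// Guarded so the constants above can also be loaded outside a worker scope.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Keep the worker "installing" until the precache is fully populated.
    event.waitUntil(
      caches.open(PRECACHE).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });
}
```

Because each asset URL is unique per version, shipping a new release is just bumping the cache name (say, to static-v2) and cleaning up the old cache during activate.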

Cross Domain Caching

Your service worker can intercept URLs outside of your scope, which allows you to cache beyond your domain. But if both of us have a service worker, we are both getting and caching those Google Fonts. We can go one better with the new foreign fetch API. This allows Google to have a service worker that caches the fonts once, and everyone benefits from it. This should offer huge leverage for the common assets that we use on the Web, especially the bulky ones like fonts.
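
A rough sketch of what the provider side of that might look like, based on the experimental foreign fetch proposal (the event name, registration call, and response shape here follow the draft and may well change; the scope and cache name are hypothetical):

```javascript
// fonts-sw.js — provider-side service worker using the experimental
// foreign fetch proposal (draft API shape; subject to change).
const FOREIGN_SCOPES = ['/fonts/'];

if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Opt in to handling fetches from other origins for these scopes.
    event.registerForeignFetch({
      scopes: FOREIGN_SCOPES,
      origins: ['*'], // any site embedding these fonts
    });
  });

  self.addEventListener('foreignfetch', (event) => {
    // Serve from the shared cache, falling back to (and filling from) the network.
    event.respondWith(
      caches.open('fonts-v1').then((cache) =>
        cache.match(event.request).then((cached) => {
          if (cached) return { response: cached };
          return fetch(event.request).then((response) => {
            cache.put(event.request, response.clone());
            return { response };
          });
        })
      )
    );
  });
}
```

The point is that the font provider caches once, under its own origin, and every embedding site benefits.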

Back in the day I worked with the team to create the Hosted Libraries as a CDN for common JavaScript libraries. The thinking was, if we could get a lot of people using the same URL for jQuery version x.y.z, then there would be a higher likelihood that it would be in the cache, and thus available immediately.

We also talked a lot about the idea of not relying on the URL. What if we could put a hash into the script tag, allowing any URL that served the same file to reuse what was already cached? Hmm, well, how do you know the file is right without downloading it (which kinda defeats the point)?

But it turns out that the hash is useful in another way, and thus we now have Subresource Integrity, where we can at least make sure that the downloaded file is the one that you wanted to download and execute.

http://caniuse.com/#feat=subresource-integrity

Oops

Have you ever set a far-future header on the wrong resource by mistake? It is quite scary that a bad header can mean a file is lost to some browser caches for good. Whenever I make important changes to headers, I feel like I am on a mission in space where one wrong move snaps the tether to my ship, and my users drift off into space with no way for me to fix them.

I would love a way for my service worker to be that tether, so I could tell the browser: “hey, if you have a file that matches this URL, can you reset the cache header, because I messed up? er, kthankssorry!”
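
There’s no such escape hatch for the HTTP cache itself yet, but for requests a service worker controls, one partial workaround is to force a fresh fetch for known-bad URLs using the Request cache mode (the URL below is hypothetical):

```javascript
// sw.js — route around a mistakenly far-future-cached file (sketch).
const BAD_URLS = ['/scripts/oops.js']; // hypothetical file shipped with a bad header

if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('fetch', (event) => {
    const url = new URL(event.request.url);
    if (BAD_URLS.includes(url.pathname)) {
      // cache: 'reload' skips the HTTP cache on the way out, then stores
      // the fresh response in the cache for subsequent requests.
      event.respondWith(fetch(event.request.url, { cache: 'reload' }));
    }
  });
}
```

It only helps for pages the service worker controls, and the bad entry still lingers in the HTTP cache, but it at least stops users executing the stale file.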

In general I love the trend of empowering developers to do what makes sense for them, by getting more and more access to the browser, but only when needed.
