What are the downsides to using skipWaiting and clientsClaim with Workbox?

小蘑菇 · 2021-01-31 22:16

By default skipWaiting is set to false in Workbox. Assuming you're only using the Workbox-created service worker for caching, are there any downsides to setting it to true?

1 Answer
  • 2021-01-31 22:51

    As background, I'd recommend reading "The Service Worker Lifecycle". It's written from the generic service worker perspective, but the points apply to Workbox-powered service workers as well.

    These concepts are also explained in more detail, from the perspective of using create-react-app, in this "Paying Attention while Loading Lazily" talk and microsite.

    Workbox's implementation

    Workbox uses an install event handler to cache new or updated entries in the precache manifest (appending a __WB_REVISION__ query param to the entries' URLs when needed, to avoid overwriting existing entries with different revisions), and it uses the activate event handler to delete previously cached entries that are no longer listed in the precache manifest.
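    The revision-param behavior described above can be sketched as a small pure helper. The function name `addRevisionParam` is hypothetical; Workbox's real logic lives in workbox-precaching, but the idea is the same:

    ```javascript
    // Sketch: turn a precache manifest entry ({ url, revision }) into the
    // URL Workbox would use as its cache key. `addRevisionParam` is a
    // hypothetical name for illustration only.
    function addRevisionParam(entry, base = 'https://example.com') {
      // Entries whose URL already embeds a content hash carry a null
      // revision and need no extra query param.
      if (!entry.revision) {
        return entry.url;
      }
      const url = new URL(entry.url, base);
      // Append __WB_REVISION__ so two revisions of the same URL occupy
      // distinct cache entries instead of overwriting each other.
      url.searchParams.set('__WB_REVISION__', entry.revision);
      return url.href;
    }
    ```

    This is why an updated `/index.html` can be precached during install without clobbering the copy that currently open pages still depend on.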

    If skipWaiting is true, then the activate handler, which is responsible for cleaning up outdated URLs from the cache, will be executed immediately after an updated service worker is installed.

    Under these circumstances, is there any reason to not just set skipWaiting to true?

    There are a few risks to using skipWaiting, and to err on the side of safety, we disable it by default in Workbox. (It was enabled by default in older, sw-precache-generated service workers.)

    Let's assume a scenario in which your HTML is loaded cache-first (generally a best practice), and then sometime after that load, a service worker update is detected.

    Risk 1: Lazy-loading of fingerprinted content

    Modern web apps often combine two techniques: asynchronous lazy-loading of resources only when needed (e.g. switching to a new view in a single page app) and adding fingerprints (hashes) to uniquely identify URLs based on the content they contain.

    Traditionally, the reference to the current list of URLs that need to be lazy-loaded lives either in the HTML itself or in some JavaScript that contains route information and is also loaded early in the page lifecycle.

    Let's say that the version of your web app which was initially loaded (before the service worker update happened) thinks that the URL /view-one.abcd1234.js needs to be loaded in order to render /view-one. But in the meantime, you've deployed an update to your code, and /view-one.abcd1234.js has been replaced on your server and in your precache manifest by /view-one.7890abcd.js.

    If skipWaiting is true, then /view-one.abcd1234.js will be purged from the caches as part of the activate event. Chances are that you've already purged it from your server as part of your deployment as well. So that request will fail.

    If skipWaiting is false, then /view-one.abcd1234.js will continue to be available in your caches until all the open clients of the service worker have been closed. That's usually what you'd want.

    Note: While using a service worker can make it more likely to run into this class of problem, it's an issue for all web apps that lazy-load versioned URLs. You should always have error handling in place to detect lazy-loading failures and attempt to recover by, e.g., forcibly reloading your page. If you have that recovery logic in place, and you're okay with that UX, you might choose to enable skipWaiting anyway.
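    That recovery logic might look something like the following sketch, where the `reload` callback is injectable (in a browser you'd pass `() => window.location.reload()`); the function name and retry policy are illustrative assumptions, not a Workbox API:

    ```javascript
    // Sketch: attempt a dynamic import of a view chunk, retry once, and
    // fall back to a full reload so the client picks up fresh HTML and a
    // fresh chunk manifest after a deployment has purged the old chunk.
    async function loadViewWithRecovery(importer, reload, retries = 1) {
      for (let attempt = 0; attempt <= retries; attempt++) {
        try {
          return await importer();
        } catch (err) {
          // Likely a network error or 404: the fingerprinted chunk may
          // have been removed from the server (and caches) by a deploy.
        }
      }
      // Give up and reload the page to re-sync HTML and chunk URLs.
      reload();
      return null;
    }
    ```

    In a single-page app you'd call it as `loadViewWithRecovery(() => import('./view-one.js'), () => window.location.reload())` at each route transition.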

    Risk 2: Lazy-loading of incompatible logic

    This is similar to the first risk, but it applies when you don't have hash fingerprints in your URLs. If you deploy an update to your HTML and a corresponding update to one of your views, existing clients can end up lazy-loading a version of /view-one.js that doesn't correspond to the structure of the HTML that was fetched from the cache.

    If skipWaiting is false, then this isn't likely to happen, since the /view-one.js that's loaded from the cache should correspond to the HTML structure that was loaded from the same cache. That's why it's a safer default.

    Note: Like before, this isn't a risk that's unique to service workers—any web app that dynamically loads unversioned URLs might end up loading incompatible logic following a recent deployment.

    so what does clientsClaim do?

    clientsClaim ensures that all uncontrolled clients (i.e. pages) that are within scope will be controlled by a service worker immediately after that service worker activates. If you don't enable it, then they won't be controlled until the next navigation.

    It's usually safer to enable than skipWaiting, and can be helpful if you want the service worker to start populating runtime caches sooner rather than later.
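    If you're generating your service worker with workbox-build's generateSW, that combination (clientsClaim on, skipWaiting left at its safer default) can be expressed as in this sketch; the glob paths are placeholders for your own build output:

    ```javascript
    // Sketch of a workbox-build config; adjust paths to your project.
    const { generateSW } = require('workbox-build');

    generateSW({
      globDirectory: 'dist/',
      globPatterns: ['**/*.{html,js,css}'],
      swDest: 'dist/sw.js',
      clientsClaim: true,  // control already-open pages as soon as the SW activates
      skipWaiting: false,  // default: new SW waits until all tabs are closed
    });
    ```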

    I'll refer you to the section in the Service Worker Lifecycle doc for more info.
