
consider adding attributes to track Cache API performance in a service worker FetchEvent handler

Open wanderview opened this issue 7 years ago • 6 comments

It would be nice to give sites some more insight into the time spent in a service worker FetchEvent handler. Currently browsers mostly (see #119) provide these two PerformanceResourceTiming markers:

  • requestStart marks when the FetchEvent is dispatched
  • responseStart marks when the evt.respondWith() promise resolves
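
A minimal sketch of what a page can measure with just these two markers (resourceURL below is a placeholder for the fetched URL): the whole span from FetchEvent dispatch to respondWith() resolution, with no visibility into what the handler spent that time on.

let entry = performance.getEntriesByName(resourceURL)[0];
// responseStart - requestStart approximates the total time spent in the fetch handler
let handlerTime = entry.responseStart - entry.requestStart;
console.log('fetch handler time:', handlerTime, 'ms');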

For sites using a service worker to offline their resources, the fetch handler will often be a mix of various JavaScript and Cache API operations like:

evt.respondWith(async function() {
  let c = await caches.open("foo");
  return c.match(evt.request);
}());

It's not easy to associate the caches.open() and cache.match() time with the FetchEvent, but we could add PerformanceResourceTiming attributes like:

  • cacheMatchStart marks the beginning of the cache.match() or cache.matchAll() that produced the Response passed to evt.respondWith()
  • cacheMatchEnd marks when the cache.match() promise resolved the Response passed to evt.respondWith()

This would require setting some internal timing information on the Response. This information would be copied automatically across a Response.clone(). Any other consumption and re-packing of the Response would lose the information, though.
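
As a rough sketch of how a page might consume these attributes if they were added (these are proposal names, not a shipped API; resourceURL is a placeholder):

let entry = performance.getEntriesByName(resourceURL)[0];
// time spent in the cache.match() / cache.matchAll() that produced the Response
let cacheMatchTime = entry.cacheMatchEnd - entry.cacheMatchStart;
// remaining handler time (javascript, caches.open(), etc.)
let otherHandlerTime = (entry.responseStart - entry.requestStart) - cacheMatchTime;
console.log({ cacheMatchTime, otherHandlerTime });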

wanderview commented Aug 16 '18

@asutherland @mrbkap @bakulf @youennf @toddreifsteck @aliams

What do you all think? I can put together a spec text PR and WPT if folks think this is a reasonable approach.

wanderview commented Aug 16 '18

Sounds reasonable and useful.

I think an interesting question is whether it's worth including a third number like cacheMatchFirstProcessed in there that would help indicate when there is a delay in processing the match() calls in the back-end due to a foot-gun Cache.keys() operation dominating the Cache backend with a ton of IO/allocations/etc. The answer to that question is probably "no", but it does raise the issue of how we can make it easier to diagnose that pathological situation.
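
For illustration, the foot-gun pattern being referred to is roughly a full pass over a large cache (the 'assets' name and pruneCache() helper here are made up for the sketch):

async function pruneCache() {
  let cache = await caches.open('assets');
  // keys() touches every entry in the cache: lots of IO and allocations
  for (let request of await cache.keys()) {
    // per-entry expiry / cleanup work ...
  }
}

While that runs, cache.match() calls from the fetch handler can queue up behind it in the backend; cacheMatchStart/cacheMatchEnd would show a slow match() but not why it was slow.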

asutherland commented Aug 20 '18

> I think an interesting question is whether it's worth including a third number like cacheMatchFirstProcessed in there that would help indicate when there is a delay in processing the match() calls in the back-end due to a foot-gun Cache.keys() operation dominating the Cache backend with a ton of IO/allocations/etc. The answer to that question is probably "no", but it does raise the issue of how we can make it easier to diagnose that pathological situation.

Interesting idea, but I think I'd prefer to start with something minimal and expand to more granular information like this if necessary in the future.

wanderview commented Sep 05 '18

Here's another possible approach that is simple, but might be a bit more flexible than the original comment above:

  • workerCacheMatchStart contains the time cache.match(), cache.matchAll(), or caches.match() were called iff the FetchEvent was fulfilled with a Response from one of these APIs.
  • workerFetchStart contains the time fetch() was called iff the FetchEvent was fulfilled with a Response produced by fetch().
  • workerResponseCreated contains the time the Response used to fulfill the FetchEvent was created. This could be when new Response() is called or when cache.match() or fetch() resolves its promise.
  • workerResponseCloned contains the time the Response used to fulfill the FetchEvent was cloned.

Some examples of what this would look like in practice:

//
// service worker script
//
self.addEventListener('fetch', evt => {
  if (evt.request.url.endsWith('cache')) {
    evt.respondWith(caches.match(evt.request));
    return;
  }

  if (evt.request.url.endsWith('fetch')) {
    evt.respondWith(fetch(evt.request));
    return;
  }

  if (evt.request.url.endsWith('constructor')) {
    evt.respondWith(new Response('hello world'));
    return;
  }
});

//
// main window script
//
async function test() {
  // baseURL is a stand-in for a same-origin URL prefix handled by the service worker above
  const baseURL = './resource';

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerCacheMatchStart: time caches.match() begins
  //  * workerResponseCreated: time caches.match() resolves with the Response
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const cacheURL = baseURL + "?q=cache";
  await fetch(cacheURL);
  let cacheEntry = performance.getEntriesByName(cacheURL)[0];
  console.log(cacheEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerFetchStart: time fetch() begins
  //  * workerResponseCreated: time fetch() resolves with the Response
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const fetchURL = baseURL + "?q=fetch";
  await fetch(fetchURL);
  let fetchEntry = performance.getEntriesByName(fetchURL)[0];
  console.log(fetchEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerResponseCreated: time `new Response()` is called
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const constructorURL = baseURL + "?q=constructor";
  await fetch(constructorURL);
  let constructorEntry = performance.getEntriesByName(constructorURL)[0];
  console.log(constructorEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired (or time network fallback request starts???)
  //  * responseStart: time fallback network response is available
  const fallbackURL = baseURL + "?q=fallback";
  await fetch(fallbackURL);
  let fallbackEntry = performance.getEntriesByName(fallbackURL)[0];
  console.log(fallbackEntry);
}

Of course, all of these examples are somewhat silly since the Response creation process encompasses the entire fetch event handler. In practice, though, real web sites do substantially more work in their fetch event handlers for analytics, etc. It would be useful to measure the Response creation bits separately from the overall time.
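
Assuming the attributes proposed above (sketch only, not a shipped API), the cache case from the example could then be broken down roughly like this, reusing cacheURL from the snippet above:

let entry = performance.getEntriesByName(cacheURL)[0];
// whole fetch handler: dispatch to respondWith() resolution
let totalHandlerTime = entry.responseStart - entry.requestStart;
// work before the cache lookup starts (routing, analytics, caches.open(), etc.)
let preMatchTime = entry.workerCacheMatchStart - entry.requestStart;
// the Response creation bits: caches.match() call to Response resolution
let responseCreationTime = entry.workerResponseCreated - entry.workerCacheMatchStart;
console.log({ totalHandlerTime, preMatchTime, responseCreationTime });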

Edit: Updated example to make it clearer...

wanderview commented Sep 05 '18

Let's discuss this at TPAC. Added it as an item on the proposals list.

yoavweiss commented Sep 07 '18

Thank you. I'll write an explainer for my larger "workerTiming" proposal to include in that discussion as well. It seemed like some folks were more excited/interested in that than this modest proposal.

wanderview commented Sep 07 '18