resource-timing
Consider adding attributes to track Cache API performance in a service worker FetchEvent handler
It would be nice to give sites some more insight into the time spent in a service worker FetchEvent handler. Currently browsers mostly (see #119) provide these two PerformanceResourceTiming markers:
- `requestStart` marks when the FetchEvent is dispatched
- `responseStart` marks when the `evt.respondWith()` promise resolves
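With only those two markers, a page can measure the total time a request spent in the worker's handler, but nothing finer-grained. A minimal sketch of that computation (the entry here is a plain object standing in for a real PerformanceResourceTiming instance, with made-up values):

```javascript
// Total time a FetchEvent spent in the service worker: from dispatch
// (requestStart) until the respondWith() promise settled (responseStart).
function workerHandlerDuration(entry) {
  return entry.responseStart - entry.requestStart;
}

// Illustrative stand-in for performance.getEntriesByName(url)[0].
const entry = { requestStart: 100.0, responseStart: 137.5 };
console.log(workerHandlerDuration(entry)); // 37.5
```

Everything between those two timestamps (cache lookups, analytics, etc.) is currently opaque to the page.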
For sites using a service worker to offline their resources, the fetch handler will often be a mix of various JavaScript and Cache API operations like:
```js
evt.respondWith(async function() {
  let c = await caches.open("foo");
  return c.match(evt.request);
}());
```
It's not easy to associate the `caches.open()` time with the FetchEvent, but we could add PerformanceResourceTiming attributes like:
- `cacheMatchStart` marks the beginning of the `cache.match()` or `cache.matchAll()` that produced the `Response` passed to `evt.respondWith()`
- `cacheMatchEnd` marks when the `cache.match()` promise resolved the `Response` passed to `evt.respondWith()`
This would require setting some internal timing information on the Response. This information would be copied automatically across a Response.clone(). Any other consumption and re-packing of the Response would lose the information, though.
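To make the clone-vs-repack distinction concrete, here is a sketch of the two cases (it needs a runtime with the Fetch API `Response` class, e.g. Node 18+ or a worker context; the "internal timing information" itself is hypothetical, part of this proposal, so it cannot actually be observed here):

```javascript
const original = new Response("hello", { headers: { "x-demo": "1" } });

// clone(): under the proposal, the internal timing information would be
// copied automatically onto the clone along with the visible state.
const cloned = original.clone();
console.log(cloned.headers.get("x-demo")); // "1"

// Re-packing into a fresh Response: a brand-new object, so any internal
// timing attached to `original` would be lost.
const repacked = new Response(cloned.body, { status: cloned.status });
console.log(repacked instanceof Response); // true
```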
@asutherland @mrbkap @bakulf @youennf @toddreifsteck @aliams
What do you all think? I can put together a spec text PR and WPT if folks think this is a reasonable approach.
Sounds reasonable and useful.
I think an interesting question is whether it's worth including a third number like cacheMatchFirstProcessed in there that would help indicate when there is a delay in processing the match() calls in the back-end due to a foot-gun Cache.keys() operation dominating the Cache backend with a ton of IO/allocations/etc. The answer to that question is probably "no", but it does raise the issue of how we can make it easier to diagnose that pathological situation.
Interesting idea, but I think I'd prefer to start with something minimal and expand to more granular information like this if necessary in the future.
Here's another possible approach that is simple, but might be a bit more flexible than the original comment above:
- `workerCacheMatchStart` contains the time `cache.match()`, `cache.matchAll()`, or `caches.match()` were called iff the FetchEvent was fulfilled with a Response from one of these APIs.
- `workerFetchStart` contains the time `fetch()` was called iff the FetchEvent was fulfilled with a Response produced by `fetch()`.
- `workerResponseCreated` contains the time the Response used to fulfill the FetchEvent was created. This could be when `new Response()` is called or when `cache.match()` or `fetch()` resolve their promise.
- `workerResponseCloned` contains the time the Response used to fulfill the FetchEvent was cloned.
Some examples of what this would look like in practice:
```js
//
// service worker script
//
self.addEventListener('fetch', evt => {
  if (evt.request.url.endsWith('cache')) {
    evt.respondWith(caches.match(evt.request));
    return;
  }
  if (evt.request.url.endsWith('fetch')) {
    evt.respondWith(fetch(evt.request));
    return;
  }
  if (evt.request.url.endsWith('constructor')) {
    evt.respondWith(new Response('hello world'));
    return;
  }
});
```
```js
//
// main window script
//
async function test() {
  // Results in the following PerformanceResourceTiming attributes:
  // * requestStart: time FetchEvent is fired
  // * workerCacheMatchStart: time caches.match() begins
  // * workerResponseCreated: time caches.match() resolves the Response
  // * responseStart: time respondWith promise is fulfilled with a Response
  const cacheURL = baseURL + "?q=cache";
  await fetch(cacheURL);
  let cacheEntry = performance.getEntriesByName(cacheURL)[0];
  console.log(cacheEntry);

  // Results in the following PerformanceResourceTiming attributes:
  // * requestStart: time FetchEvent is fired
  // * workerFetchStart: time fetch() begins
  // * workerResponseCreated: time fetch() resolves the Response
  // * responseStart: time respondWith promise is fulfilled with a Response
  const fetchURL = baseURL + "?q=fetch";
  await fetch(fetchURL);
  let fetchEntry = performance.getEntriesByName(fetchURL)[0];
  console.log(fetchEntry);

  // Results in the following PerformanceResourceTiming attributes:
  // * requestStart: time FetchEvent is fired
  // * workerResponseCreated: time `new Response()` is called
  // * responseStart: time respondWith promise is fulfilled with a Response
  const constructorURL = baseURL + "?q=constructor";
  await fetch(constructorURL);
  let constructorEntry = performance.getEntriesByName(constructorURL)[0];
  console.log(constructorEntry);

  // Results in the following PerformanceResourceTiming attributes:
  // * requestStart: time FetchEvent is fired (or time network fallback request starts???)
  // * responseStart: time fallback network response is available
  const fallbackURL = baseURL + "?q=fallback";
  await fetch(fallbackURL);
  let fallbackEntry = performance.getEntriesByName(fallbackURL)[0];
  console.log(fallbackEntry);
}
```
Of course, all of these examples are somewhat silly since the Response creation process encompasses the entire fetch event handler. In practice, though, real web sites do substantially more work in their fetch event handlers for analytics, etc. It would be useful to measure the Response creation bits separately from the overall time.
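For example, with the proposed attributes a page could split the worker time into "handler overhead" (analytics etc. before the cache lookup) versus the Response creation itself. A sketch of that bookkeeping (the `workerCacheMatchStart`/`workerResponseCreated` names and the sample values are assumptions from this proposal, not a shipped API):

```javascript
// Break a (hypothetical) entry's worker time into three phases:
// handler work before the cache lookup, the cache.match() itself,
// and settling the respondWith() promise afterwards.
function splitWorkerTime(entry) {
  return {
    handlerOverhead: entry.workerCacheMatchStart - entry.requestStart,
    responseCreation: entry.workerResponseCreated - entry.workerCacheMatchStart,
    respondWithSettle: entry.responseStart - entry.workerResponseCreated,
  };
}

const entry = {
  requestStart: 100,
  workerCacheMatchStart: 112, // e.g. analytics code ran for 12ms first
  workerResponseCreated: 130, // cache.match() took 18ms
  responseStart: 131,
};
console.log(splitWorkerTime(entry));
// { handlerOverhead: 12, responseCreation: 18, respondWithSettle: 1 }
```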
Edit: Updated example to make it clearer...
Let's discuss this at TPAC. Added it as an item on the proposals list
Thank you. I'll write an explainer for my larger "workerTiming" proposal to include in that discussion as well. It seemed like some folks were more excited/interested in that than this modest proposal.