De-duplicate performance.mark and performance.measure
performance.measure() takes a start and an end mark. In our marker processing step we should de-duplicate the marks that are used to create measures: currently we show 3 markers for every 1 measure (the measure itself plus its start and end marks).
This work should most likely be done in: https://github.com/devtools-html/perf.html/blob/7d9ac63cb58629a458d96a7a9150716cdc358ee4/src/profile-logic/profile-data.js#L905-L959
We also need a way to go from a measure marker to its corresponding start and/or end markers (both are optional, by the way). So unless that information is already in the payload, we need a Gecko patch too.
Seems like it doesn't store the mark names in the payload, just the timestamps: https://searchfox.org/mozilla-central/source/dom/performance/Performance.cpp#366
#ifdef MOZ_GECKO_PROFILER
  if (profiler_is_active()) {
    TimeStamp startTimeStamp = CreationTimeStamp() +
                               TimeDuration::FromMilliseconds(startTime);
    TimeStamp endTimeStamp = CreationTimeStamp() +
                             TimeDuration::FromMilliseconds(endTime);
    profiler_add_marker(
      "UserTiming",
      MakeUnique<UserTimingMarkerPayload>(aName, startTimeStamp, endTimeStamp));
  }
#endif
Several marks with the same name can exist at the same time. So unless the Gecko profiler stored some other reference to the marks rather than just their names, we would still need to compare the timestamps between the measure and the marks in order to match them.
Perhaps a Gecko patch is unnecessary if we just compare timestamps? It is very unlikely that two marks would have exactly the same timestamp. So when we find a measure, we can take its start and end timestamps, check whether marks with those timestamps exist, and filter them out of the returned array.
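A minimal sketch of that timestamp-matching idea (the marker shapes and field names here are simplified assumptions for illustration, not the actual payload format in profile-data.js):

```javascript
// Sketch of timestamp-based de-duplication. Marker shapes are assumed:
// marks have a `time`, measures have `startTime`/`endTime`.
function deduplicateUserTimingMarkers(markers) {
  // Collect the start/end timestamps of every measure marker.
  const measureTimestamps = new Set();
  for (const marker of markers) {
    if (marker.type === 'measure') {
      measureTimestamps.add(marker.startTime);
      measureTimestamps.add(marker.endTime);
    }
  }
  // Drop any mark whose timestamp coincides with a measure boundary,
  // relying on the assumption that exact timestamp collisions between
  // unrelated marks are very unlikely.
  return markers.filter(
    marker => marker.type !== 'mark' || !measureTimestamps.has(marker.time)
  );
}

// Example: a measure plus its start and end marks collapses to 1 marker.
const markers = [
  { type: 'mark', name: 'a-start', time: 1 },
  { type: 'mark', name: 'a-end', time: 5 },
  { type: 'measure', name: 'a', startTime: 1, endTime: 5 },
];
console.log(deduplicateUserTimingMarkers(markers).length); // 1
```

This keeps the measure as the single visible marker and hides the two marks it was built from, which would bring the ratio from 3:1 down to 1:1.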
I'd be more comfortable if we added the two names of the start/end markers to the payload. And yeah, I agree with you: we'll need to compare timestamps too!
Filed bug 1425605
PR #858 can be resurrected to fix this; there was a correctness issue raised in the review, and I ended up working on other things.
Please close Bug 1421947 after this bug gets fixed.