WIP - differential timestamp compression when serializing processed profiles
Just a WIP to share the code and have a deploy preview for feedback. Flow and tests don't pass (yet).
For https://profiler.firefox.com/public/k56pyq04p6yey31fxez6gex1ab53avx14ghmsb8/ (a power profile coming from the Gecko profiler on Android) the gzipped size drops from 82.2MB to 77.9MB (5% win)
For https://profiler.firefox.com/public/qpws4tqwgah43y1q22p2e9nhyc3j6ma3wwe38nr/ (a power profile from a USB power meter, with µs timestamp precision on the samples) the gzipped size drops from 134MB to 93.5MB (30% win). Note: without the PR, I could serialize and compress 4h of this profile, resulting in a 165.6MB file I can download but not upload; longer time ranges failed with OOM errors. With the PR, serializing 4h also hit OOM errors, so I reduced the range a bit: 3h20min is the longest time range of this profile that can be serialized with the PR without running into OOM errors.
For https://profiler.firefox.com/public/83t636whpa0h6vbcvxr32nz29q3wy116arvtfp0/ (a power profile from a smart power plug, with a perfectly consistent 1s sampling rate) the gzipped size drops from 61.8kB to 8.38kB (86% win).
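For context on where these wins come from: the idea is plain delta encoding of the sample time column. Absolute timestamps grow monotonically, so every serialized value is a long, mostly unique number; deltas stay a few digits long and, with a regular sampling rate, are highly repetitive, which both shrinks the raw JSON and gives gzip long runs to exploit (the smart-plug profile above, with its perfectly consistent 1s rate, is the extreme case). Here is a minimal sketch of the transform, assuming the serialized field is called `timeDeltas` as mentioned later in this thread; the helper names are illustrative, not the PR's actual code:

```js
// Delta-encode an absolute `time` column (milliseconds) into `timeDeltas`.
// With a steady sampling rate the output is a run of near-identical small
// numbers, e.g. [0, 1000, 2000, 3000] -> [0, 1000, 1000, 1000].
function encodeTimeDeltas(time) {
  const timeDeltas = new Array(time.length);
  let previous = 0;
  for (let i = 0; i < time.length; i++) {
    timeDeltas[i] = time[i] - previous;
    previous = time[i];
  }
  return timeDeltas;
}

// Inverse transform, applied when deserializing: accumulate the deltas back
// into absolute timestamps. Note that with floating-point times the
// accumulation can drift slightly, so a real implementation has to be
// careful about rounding.
function decodeTimeDeltas(timeDeltas) {
  const time = new Array(timeDeltas.length);
  let accumulated = 0;
  for (let i = 0; i < timeDeltas.length; i++) {
    accumulated += timeDeltas[i];
    time[i] = accumulated;
  }
  return time;
}
```

Even for the irregular µs-precision profile, the deltas are much shorter strings than the ever-growing absolute timestamps, which presumably accounts for the 30% win there.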
I think we'll have to add some kind of upgrader support for the serialized format if we rely on more of its structure. At the moment, the pipeline is: JSON.parse -> convert to unserialized -> run processed upgrader -> use processed format.
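To make the "upgrader support" point concrete, here is a hedged sketch of what version-keyed upgraders for the serialized shape could look like; `SERIALIZED_UPGRADERS`, `upgradeSerializedProfile`, and `CURRENT_VERSION` are hypothetical names, not existing code:

```js
// Hypothetical: upgraders keyed by target version that operate directly on
// the serialized JSON shape (where fields like timeDeltas exist), mirroring
// how version-keyed format upgraders are usually organized.
const CURRENT_VERSION = 57; // illustrative value
const SERIALIZED_UPGRADERS = {
  // 57: (json) => { /* migrate the serialized shape from v56 to v57 */ },
};

function upgradeSerializedProfile(json) {
  for (let v = json.meta.version + 1; v <= CURRENT_VERSION; v++) {
    const upgrader = SERIALIZED_UPGRADERS[v];
    if (upgrader) {
      upgrader(json);
    }
  }
  json.meta.version = CURRENT_VERSION;
}
```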
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 88.42%. Comparing base (662dc4a) to head (6670371).
Additional details and impacted files
```
@@           Coverage Diff            @@
##             main    #5033    +/-   ##
========================================
  Coverage   88.41%   88.42%
========================================
  Files         304      304
  Lines       27552    27574    +22
  Branches     7450     7456     +6
========================================
+ Hits        24361    24383    +22
  Misses       2963     2963
  Partials      228      228
```
> I think we'll have to add some kind of upgrader support for the serialized format if we rely on more of its structure. At the moment, the pipeline is: JSON.parse -> convert to unserialized -> run processed upgrader -> use processed format.
I was confused in this comment and in other comments on this PR. The processed format upgraders actually run before the conversion from the serialized format: the upgraders see timeDeltas, stringArray, etc. I'll put up a PR to remove the misleading comment. Edit: PR #5285
(Nothing is broken, as far as I can tell. I was just confused on this point.)
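Spelled out, the actual load order is the reverse of what the earlier comment claimed; a minimal sketch, where `upgradeProcessedProfile` and `convertToProcessedProfile` are stand-ins for whatever the codebase actually calls these steps:

```js
function loadSerializedProfile(jsonString) {
  const json = JSON.parse(jsonString);
  // The processed-format upgraders run first, on the serialized shape:
  // they see timeDeltas, stringArray, etc.
  upgradeProcessedProfile(json);
  // Only afterwards is the serialized shape converted to the in-memory
  // processed format (e.g. timeDeltas accumulated back into time).
  return convertToProcessedProfile(json);
}
```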