pwmetrics
CI: Calculate metrics based on previous results.
As part of CI it would be nice to have a feature that calculates metrics based on previous results, so we can alert when perf metrics regress significantly.
Hi. Do you wanna use gdrive for this?
So in this case we should also be able to compare expectations not just with the object set in the config, but also with a file of expectations.
hey @denar90
I think gdrive is one of the options, but ideally you want to have several, right? For example, storing results locally. That's what I'm doing right now in my patched version of pwmetrics. Or maybe you want to push them to s3.
So in this case we should also be able to compare expectations not just with the object set in the config, but also with a file of expectations.
That's right. As part of the calculation you take the budget from the config file and the previous results from the store (file/gdrive/s3/etc.).
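For illustration, a minimal sketch of that comparison step, assuming plain JSON results; the metric shape, the budget format, and the 10% threshold are assumptions, not the current pwmetrics API:

```ts
// Hypothetical comparison step: the budget comes from the config,
// the previous results come from the store (file/gdrive/s3/etc.).
interface MetricResult {
  id: string;      // e.g. 'ttfcp'
  timing: number;  // milliseconds
}

function findRegressions(
  current: MetricResult[],
  previous: MetricResult[],
  budget: Record<string, number>,   // absolute limits from the config file
  maxRelativeIncrease = 0.1         // alert if >10% slower than the previous run
): string[] {
  const prevById = new Map(previous.map(m => [m.id, m.timing]));
  const alerts: string[] = [];

  for (const metric of current) {
    const limit = budget[metric.id];
    if (limit !== undefined && metric.timing > limit) {
      alerts.push(`${metric.id}: ${metric.timing}ms exceeds the budget of ${limit}ms`);
    }
    const prev = prevById.get(metric.id);
    if (prev !== undefined && metric.timing > prev * (1 + maxRelativeIncrease)) {
      alerts.push(`${metric.id}: ${metric.timing}ms is more than 10% slower than the previous ${prev}ms`);
    }
  }
  return alerts;
}
```

A CI run could then warn or fail whenever `findRegressions` returns a non-empty list.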
I think gdrive is one of the options, but ideally you want to have several, right? For example, storing results locally. That's what I'm doing right now in my patched version of pwmetrics.
Would be nice to take a look 👀. Can you create a PR or something?
I have a couple of questions. How do we identify the latest file? Should results always be rewritten after each successful run? How do we handle Travis (only a gdrive or similar option, since I'm not sure we can store results locally there)? Should we add an additional handler for reading from gdrive as part of the config?
@denar90
Would be nice to take a look 👀. Can you create a PR or something?
Sure, I'll open a PR.
How do we identify the latest file? Should results always be rewritten after each successful run?
Very good questions.
Each time we perform a test, a new file is created with a name that contains a timestamp (e.g. perf_test_29.06.17.00.38.29.json).
Of course we don't want to go crazy and create millions of files, so it's good to set a limit (which could/should be configurable), let's say 10 files. This way we always keep the 10 latest results (first in, first out: the oldest file gets evicted first).
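A rough sketch of that rotation, assuming local JSON files; the directory name, the file prefix, and the limit of 10 are placeholders, not part of the current pwmetrics API:

```ts
// Keep only the N latest timestamped result files on disk.
import * as fs from 'fs';
import * as path from 'path';

const RESULTS_DIR = './pwmetrics-results';
const MAX_FILES = 10; // the configurable limit discussed above

function saveResults(results: object): void {
  fs.mkdirSync(RESULTS_DIR, { recursive: true });

  // Timestamped name, similar to perf_test_29.06.17.00.38.29.json
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  const file = path.join(RESULTS_DIR, `perf_test_${stamp}.json`);
  fs.writeFileSync(file, JSON.stringify(results, null, 2));

  // Evict the oldest files so only the MAX_FILES latest remain.
  const files = fs.readdirSync(RESULTS_DIR)
    .filter(f => f.startsWith('perf_test_'))
    .sort(); // ISO-style timestamps sort chronologically
  for (const old of files.slice(0, Math.max(0, files.length - MAX_FILES))) {
    fs.unlinkSync(path.join(RESULTS_DIR, old));
  }
}
```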
How do we handle Travis (only a gdrive or similar option, since I'm not sure we can store results locally there)?
I'm not sure if it's valid for Travis. We can use local storage when we run tests:
- From a local machine.
- From a docker container, for example as part of a Jenkins pipeline.
Should we add an additional handler for reading from gdrive as part of the config?
It is definitely worth having.
Good points. I'm 👍 for this approach; we can also add some tuning later, but for now we should stick to this one. cc @paulirish's team (@samccone @pedro93), thoughts?
I'm OK with trying something like this, though I'm not exactly sure how to implement it. I see two options:
- Tightly integrate CI metric storage, which means that pwmetrics will no longer be just a console utility but will have a data store attached somehow (perhaps a plugin?), or add more storage-related options.
- Create a plugin of sorts that does these operations (I would prefer this), e.g. pwmetrics-storage: use pwmetrics merely as the runner, take the generated output, and store it in a filesystem, aws, google-sheets, whatever. This way the plugin can be agnostic, consuming multiple data sources. See the sketch after this list.
These are just my 2 cents. I understand the desire for the functionality; I'm just not sure whether pwmetrics as a project is the right scope for it.
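To make the second option concrete, here is a rough sketch of what an agnostic storage interface could look like; the `StorageAdapter` interface and `FileSystemAdapter` class are hypothetical, not an existing pwmetrics or pwmetrics-storage API:

```ts
// Hypothetical interface for an agnostic pwmetrics-storage plugin.
import * as fs from 'fs/promises';
import * as path from 'path';

interface StorageAdapter {
  save(results: object): Promise<void>;
  loadLatest(count: number): Promise<object[]>;
}

// One possible adapter: local filesystem. An s3, gdrive, or
// google-sheets adapter would implement the same two methods.
class FileSystemAdapter implements StorageAdapter {
  constructor(private dir: string) {}

  async save(results: object): Promise<void> {
    await fs.mkdir(this.dir, { recursive: true });
    const file = path.join(this.dir, `perf_test_${Date.now()}.json`);
    await fs.writeFile(file, JSON.stringify(results, null, 2));
  }

  async loadLatest(count: number): Promise<object[]> {
    const files = (await fs.readdir(this.dir))
      .filter(f => f.startsWith('perf_test_'))
      .sort()
      .slice(-count);
    return Promise.all(
      files.map(async f =>
        JSON.parse(await fs.readFile(path.join(this.dir, f), 'utf8'))
      )
    );
  }
}
```

With a shape like this, the runner never needs to know where results actually live; swapping storage backends is just a matter of passing a different adapter.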
@pedro93 @denar90
Create a plugin of sorts that does these operations (I would prefer this), e.g. pwmetrics-storage: use pwmetrics merely as the runner, take the generated output, and store it in a filesystem, aws, google-sheets, whatever. This way the plugin can be agnostic, consuming multiple data sources.
❤️ it.
I would then also move google-sheets out of pwmetrics and make it part of pwmetrics-storage.
If we agree on a plugin-based system, then yes, google-sheets should move out, perhaps as the first option (POC) for pwmetrics-storage.
If we want to divide pwmetrics, should we create an org for this, or should we create subpackages? Thoughts?
@denar90 @pedro93 for now, I think an org would be the optimal solution, so we can keep plugins together and prototype fast. Does that make sense, or am I missing something?
I don't have experience managing orgs or monorepos, so I'll defer to your best judgment. However, does this make sense as a plugin for pwmetrics? @julianusti, correct me if I'm wrong, but don't you want something more general?
As in, depending on the CI system you have, you create a script that runs pwmetrics, processes the output, and passes it to the CI system, which could, for example, generate a performance regression report? In this scenario, it would be the responsibility of the CI system to store the previous reports and use them to evaluate metric variations over time.
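For example, a CI-side wrapper along these lines; the programmatic `new PWMetrics(url, options).start()` call follows the pwmetrics README, but the result handling and the output file name below are assumptions:

```ts
// Sketch of a CI wrapper: run pwmetrics, write the results to a file
// that the CI system archives as a build artifact; the CI job itself
// can then diff against the report from the previous build.
import * as fs from 'fs';
// CommonJS require, as shown in the pwmetrics README.
const PWMetrics = require('pwmetrics');

async function main() {
  const pwMetrics = new PWMetrics('https://example.com/', {
    flags: { runs: 3 },  // median of 3 runs
  });
  const results = await pwMetrics.start();

  fs.writeFileSync('pwmetrics-results.json', JSON.stringify(results, null, 2));
}

main().catch(err => {
  console.error(err);
  process.exit(1);       // fail the CI job on error
});
```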