
Devise a test harness for testing performance

Open godlygeek opened this issue 1 year ago • 5 comments

We know there's low-hanging fruit available for optimizing PyStack, but we currently have no good way to benchmark our performance and quantify any improvements. Design some sort of test harness that can be used for measuring the performance impact of our changes, possibly using https://asv.readthedocs.io/en/stable/

godlygeek avatar Apr 23 '23 23:04 godlygeek
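As a hedged illustration of what the suggested ASV harness could look like: ASV discovers classes with `time_*` and `peakmem_*` methods and times them automatically. The sketch below assumes the `pystack remote <pid>` CLI as the measured entry point; the class and method names are hypothetical, not part of any existing suite.

```python
import subprocess
import sys


class RemoteStackBenchmarks:
    """Hypothetical ASV benchmark class for PyStack's `remote` command."""

    def setup(self):
        # Spawn a simple sleeping Python process for pystack to inspect.
        self.proc = subprocess.Popen(
            [sys.executable, "-c", "import time; time.sleep(60)"]
        )

    def teardown(self):
        # Clean up the target process after each benchmark run.
        self.proc.kill()
        self.proc.wait()

    def time_remote_stack(self):
        # ASV times `time_*` methods; this shells out to the pystack CLI.
        subprocess.run(
            ["pystack", "remote", str(self.proc.pid)],
            check=True,
            capture_output=True,
        )

    def peakmem_remote_stack(self):
        # ASV reports peak memory usage of `peakmem_*` methods.
        subprocess.run(
            ["pystack", "remote", str(self.proc.pid)],
            capture_output=True,
        )
```

With an `asv.conf.json` in place, `asv run` would execute these against one or more commits and store the results for later comparison.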

I'll look into this today and tomorrow.

Helithumper avatar Apr 25 '23 20:04 Helithumper

Wasn't able to look into this. Feel free to unassign.

Helithumper avatar Aug 30 '23 05:08 Helithumper

How is the performance impact of changes defined? Is it meant to be a time and space benchmark against a predefined use case? Also, is it meant to be integrated into a GitHub workflow, or is it meant to be a separate module of its own?

ms2892 avatar Jan 13 '24 01:01 ms2892

> How is the performance impact of changes defined? Is it meant to be a time and space benchmark against a predefined use case? Also, is it meant to be integrated into a GitHub workflow, or is it meant to be a separate module of its own?

My understanding is that we want something similar to the CodeCov integration, where we can easily tell, for any given PR, whether a number of predefined measurements differ when measured on main versus on the PR branch. This way, we both gain a better understanding of PyStack's current time and space performance, and become aware when a PR changes it significantly.

Any PR whose goal is to change performance would be close to meaningless without a way of measuring the difference.

sarahmonod avatar Jan 25 '24 17:01 sarahmonod
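If ASV were adopted, the main-versus-PR comparison described above maps directly onto its built-in commands. A sketch of what a CI step might run, assuming a benchmark suite already exists in the repository (this is illustrative, not an existing workflow):

```shell
# Benchmark both refs and compare; regressions beyond the given
# factor (here 10%) are flagged in the output table.
asv continuous --factor 1.10 main HEAD

# Re-display the comparison table without re-running the benchmarks.
asv compare main HEAD
```

A GitHub Actions job could run this on each PR and post the comparison table as a comment, which is roughly how the CodeCov-style experience would be achieved.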

> How is the performance impact of changes defined? Is it meant to be a time and space benchmark against a predefined use case? Also, is it meant to be integrated into a GitHub workflow, or is it meant to be a separate module of its own?
>
> My understanding is that we want something similar to the CodeCov integration, where we can easily tell, for any given PR, whether a number of predefined measurements differ when measured on main versus on the PR branch. This way, we both gain a better understanding of PyStack's current time and space performance, and become aware when a PR changes it significantly.
>
> Any PR whose goal is to change performance would be close to meaningless without a way of measuring the difference.

Hi Gus,

I implemented it in PR https://github.com/bloomberg/pystack/pull/165. Please take a look.

ms2892 avatar Jan 25 '24 17:01 ms2892