
Add simple performance gate to integration tests

Open julienp opened this issue 1 year ago • 3 comments

Ref https://github.com/pulumi/pulumi/issues/15347

I ran the tests 5 times in CI to gather times:

| Test | Run times (s) |
| --- | --- |
| TestPerfEmptyUpdate | 5.06, 5.11, 4.82, 4.95, 5.74 |
| TestPerfManyComponentUpdate | 16.7, 17.1, 16.62, 16.28, 16.29 |
| TestPerfParentChainUpdate | 17.58, 17.58, 17.46, 17.23, 17.34 |

This shows fairly consistent times. To set the thresholds, I took the slowest run for each test, added 10%, and rounded up. Very scientific 🧑‍🔬
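A minimal sketch of that thresholding rule, assuming "rounded up" means rounding to the next whole second (the comment doesn't say which granularity was used):

```python
import math

def threshold(times):
    """Slowest observed run plus 10% headroom, rounded up to a whole second."""
    return math.ceil(max(times) * 1.1)

# Applied to the CI timings above:
print(threshold([5.06, 5.11, 4.82, 4.95, 5.74]))      # TestPerfEmptyUpdate -> 7
print(threshold([16.7, 17.1, 16.62, 16.28, 16.29]))   # TestPerfManyComponentUpdate -> 19
print(threshold([17.58, 17.58, 17.46, 17.23, 17.34])) # TestPerfParentChainUpdate -> 20
```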

julienp avatar Sep 25 '24 12:09 julienp

Maybe we should run these in a separate job, and sequentially, to avoid too much noise and contention with other tests.

julienp avatar Sep 25 '24 14:09 julienp

Running with NoParallel doesn't seem to be enough to make these pass :/

julienp avatar Sep 26 '24 12:09 julienp

I'm a little curious about the choice of Python for these performance tests. Are we planning to add more for NodeJS/Go? Or are we just testing the engine here (in which case Go might be better, since it wouldn't need to spin up as much language runtime)?

Sorry, I missed this. There's no real reason for Python over the other languages; it's just what I picked to create the examples.

I think it could make sense to follow up with a test for each language. I suspect the most useful thing would be to come up with more complex scenarios in the test programs: cases where things should be fast because they happen in parallel, but that would slow down if we started processing them serially.
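A toy illustration (not Pulumi code) of the kind of regression such a scenario could catch: N operations of equal cost should finish in roughly one unit of time when they overlap, but take N units if something accidentally serializes them. The worker counts and sleep duration here are arbitrary stand-ins.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def provision(_):
    # Stand-in for one resource operation.
    time.sleep(0.1)

def run(workers):
    """Process 8 'resources' with the given degree of parallelism; return elapsed seconds."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(provision, range(8)))
    return time.monotonic() - start

parallel = run(workers=8)  # all operations overlap, roughly 0.1s
serial = run(workers=1)    # operations processed one at a time, roughly 0.8s
# A performance gate comparing against a threshold between these two
# regimes would flag an accidental switch to serial processing.
assert serial > parallel * 3
```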

julienp avatar Oct 01 '24 14:10 julienp