Add simple performance gate to integration tests
Ref https://github.com/pulumi/pulumi/issues/15347
I ran the tests 5 times in CI to gather times (in seconds):

| Test | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
|------|-------|-------|-------|-------|-------|
| TestPerfEmptyUpdate | 5.06 | 5.11 | 4.82 | 4.95 | 5.74 |
| TestPerfManyComponentUpdate | 16.7 | 17.1 | 16.62 | 16.28 | 16.29 |
| TestPerfParentChainUpdate | 17.58 | 17.58 | 17.46 | 17.23 | 17.34 |
This shows fairly consistent times. To set the thresholds, I took the slowest run for each test, added 10%, and rounded up. Very scientific 🧑‍🔬
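The threshold rule above (slowest observed run, plus 10% headroom, rounded up to a whole second) can be sketched as a small helper. This is just an illustration of the arithmetic, not code from the PR; the `threshold` function name is made up:

```go
package main

import (
	"fmt"
	"math"
)

// threshold models the gate described above: take the slowest observed
// run in seconds, add 10% headroom, and round up to a whole second.
func threshold(runs []float64) int {
	slowest := 0.0
	for _, r := range runs {
		if r > slowest {
			slowest = r
		}
	}
	return int(math.Ceil(slowest * 1.1))
}

func main() {
	fmt.Println(threshold([]float64{5.06, 5.11, 4.82, 4.95, 5.74}))      // 7
	fmt.Println(threshold([]float64{16.7, 17.1, 16.62, 16.28, 16.29}))   // 19
	fmt.Println(threshold([]float64{17.58, 17.58, 17.46, 17.23, 17.34})) // 20
}
```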
Maybe we should run these in a separate job, and sequentially, to avoid too much noise and contention from other tests. Running with NoParallel doesn't seem to be enough to make these pass :/
I'm a little curious about the choice of Python for these performance tests. Are we planning to add more for NodeJS/Go? Or are we just testing the engine here (in which case Go might be better since we don't need to spin up as much language runtime?)
Sorry, I missed this. There's no real reason for Python over other languages; it's just what I picked to create the examples.
I think it could make sense to follow up with a test for each language. I suspect the most useful thing would be to come up with more complex scenarios in the programs: cases where things should be fast because they happen in parallel, but that would slow down if we started processing something serially.