Improve CI performance
Right now CI takes too long. With a deep queue or a single flake, it can effectively grind our productivity to a halt. We should look into what can be done to improve performance. A few potential ideas:
- [ ] Ask GitHub for more runners
- [ ] Remove jobs that are low value
- [ ] More caching?
- [ ] Some other good idea
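On the queue-depth point, one cheap lever (if we don't already set it) is workflow-level `concurrency`, so a new push cancels the superseded run instead of leaving it queued. A minimal sketch, assuming a single top-level CI workflow; the file name and group name are illustrative:

```yaml
# Hypothetical addition to e.g. .github/workflows/ci.yml (file name assumed).
concurrency:
  # One group per workflow + ref: a follow-up commit or force-push to the same
  # PR cancels the now-stale run rather than letting it occupy runners.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

This doesn't make any single run faster, but it stops dead runs from deepening the queue.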
I think we already thrash the GitHub cache quite hard, so if we did more caching we would need to ask GitHub for more storage.
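For context on the storage pressure: GitHub's cache is capped per repository (10 GB at the time of writing, with least-recently-used eviction), and Rust `target` directories blow through that quickly across a large matrix. The generic pattern, whether via `actions/cache` directly or a wrapper like `Swatinem/rust-cache`, looks roughly like this; the paths and key are illustrative, not taken from our actual workflows:

```yaml
# Illustrative caching step only (not copied from our config).
- uses: actions/cache@v4
  with:
    # Cache the registry plus build outputs, keyed on the lockfile so a
    # dependency bump invalidates the cache but unrelated pushes reuse it.
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: ${{ runner.os }}-cargo-
```

More caching along these lines mostly trades eviction churn for quota, which is why more storage would be the real ask.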
I think the possible work to cut (see the sketch after this list):
- downgrade clippy to check for platforms which aren't tier 1
- run fewer doctests (e.g. maybe skip them for abi3 on most platforms)
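A rough sketch of what both cuts could look like in a matrix job; the `is-tier1` flag and the matrix entries are hypothetical, not our current workflow:

```yaml
# Hypothetical matrix excerpt; names are illustrative.
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        include:
          - os: ubuntu-latest
            is-tier1: true
          - os: ubuntu-latest   # stand-in for a non-tier-1 / abi3 config
            is-tier1: false
    steps:
      - uses: actions/checkout@v4
      # Full clippy (lints as errors) only where we actually gate on it;
      # plain `cargo check` elsewhere still catches build breakage, cheaper.
      - if: matrix.is-tier1
        run: cargo clippy --all-features -- -D warnings
      - if: ${{ !matrix.is-tier1 }}
        run: cargo check --all-features
      # `--all-targets` deliberately excludes doctests, so most configs skip
      # them; run `cargo test --doc` only on the primary config.
      - run: cargo test --all-targets
      - if: matrix.is-tier1
        run: cargo test --doc
```

Since each doctest is compiled as its own small crate, skipping them on most platforms should be a noticeable win.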
I do think reaching out to GitHub to ask for increased runners may be valuable.
Another question is whether GitHub merge queues support something akin to bors' roll-up merges, and whether we should use that to better handle multiple PRs landing simultaneously.
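For what it's worth, merge queue effectively gives roll-up behaviour for free: CI runs once per queued *group* via the `merge_group` event, and a group can contain several PRs. The workflow side is just the extra trigger; how aggressively PRs get batched is a repository setting, not workflow config. A minimal sketch, assuming our required checks live in one workflow:

```yaml
# Triggers for a merge-queue-aware workflow (workflow layout assumed).
on:
  pull_request:
  # Fired when the queue builds a candidate group; one CI run can then
  # validate several PRs at once, much like a bors roll-up.
  merge_group:
```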
So what are our settings for "Build concurrency" and "Merge limits" as described in the docs? Do we build concurrently? Do we use groups larger than one?
Screenshot of current settings: