Cache OpenAI responses for repeated regression tests to speed up tests and reduce costs
Hi, your project is an interesting idea. It would be good to have steps automatically cached and rerun from the cache. If a step fails, try running it without the cache and re-cache the result.
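A minimal sketch of that cache-with-fallback idea in TypeScript (the `Step` shape, `runAiStep`, and `replay` callbacks are hypothetical, not auto-playwright's actual API):

```typescript
// Hypothetical step shape; the real recorded steps would carry more detail.
type Step = { action: string; selector: string };

// In-memory cache keyed by the task prompt. A real implementation would
// persist this (see the "compile" idea discussed below in this thread).
const stepCache = new Map<string, Step[]>();

async function runWithCache(
  task: string,
  runAiStep: (task: string) => Promise<Step[]>, // assumed AI-backed runner
  replay: (steps: Step[]) => Promise<boolean>,  // replays cached steps, true on success
): Promise<Step[]> {
  const cached = stepCache.get(task);
  if (cached && (await replay(cached))) {
    return cached; // cache hit and replay succeeded: no AI call needed
  }
  // Cache miss, or the cached steps broke (e.g. the UI changed):
  // fall back to the AI and re-cache the fresh steps.
  const fresh = await runAiStep(task);
  stepCache.set(task, fresh);
  return fresh;
}
```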
I made a fork implementing caching. It stores the steps performed by the AI and then reproduces them in each test run. It may be of interest to you: https://github.com/seyacat/auto-playwright
The fork is perfect. What would be even cooler is if you could "compile" the tests:
- run tests locally (hits OpenAI)
- cache tests locally (the OpenAI cache is saved into a local folder)
- commit the cache to GitHub
- when tests run in CI, they don't have to hit OpenAI... everything is cached
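The steps above amount to a file-based cache that lives in the repo. A rough sketch, assuming a `.ai-cache/` folder and a hash-of-prompt key scheme (both are my assumptions, not something either repo specifies):

```typescript
import { createHash } from "crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "fs";
import { join } from "path";

// Hypothetical cache folder; it would be committed alongside the tests
// so CI can replay steps without any OpenAI calls.
const CACHE_DIR = ".ai-cache";

// Derive a stable filename from the task prompt.
function cacheKey(task: string): string {
  return createHash("sha256").update(task).digest("hex").slice(0, 16);
}

// Returns the cached steps for a task, or null on a cache miss.
function loadCachedSteps(task: string): unknown[] | null {
  const file = join(CACHE_DIR, `${cacheKey(task)}.json`);
  return existsSync(file) ? JSON.parse(readFileSync(file, "utf8")) : null;
}

// Persists freshly generated steps so they can be committed to the repo.
function saveCachedSteps(task: string, steps: unknown[]): void {
  mkdirSync(CACHE_DIR, { recursive: true });
  const file = join(CACHE_DIR, `${cacheKey(task)}.json`);
  writeFileSync(file, JSON.stringify(steps, null, 2));
}
```

In CI, `loadCachedSteps` would be the only path exercised; a miss there could either fail the build or (if an API key is available) fall back to the AI.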
I was just about to recommend this when I saw this request. @seyacat, great work. Ideally, AI is only needed the first time a test runs, and again when it breaks. Huge cost savings. I honestly think that is why ZeroStep is so expensive: they are mostly just passing the cost of running the AI model on to the customer.
@seyacat, would it be better to issue a PR to merge your fork into this repo since this functionality can GREATLY reduce cost and increase speed? It seems like this repo is actively maintained.
I merged main into my fork and added DeepSeek support. It works nicely, I think.