[HEP] Improve the e2e testing for plugins like AI Proxy/AI Cache that depend on LLM
Why do you need it?
Running real model services in an e2e environment, or calling an LLM provider's live API, is not practical. As a result, the e2e tests for these plugins did not cover the LLM-dependent logic thoroughly, which led to defects in subsequent iterations.
How to implement it?
- [ ] #1629
- [x] #1630
- [ ] #1631
- [ ] #1632
- [ ] #1633
- [ ] #1634
- [ ] #1635
- [ ] #1636