Use llm-d vLLM simulator for E2E tests
🚀 Feature Description and Motivation
The llm-d-inference-sim project is currently used to simulate an inference engine in tests. As the project gains new features, it will become even easier for us to run tests against it. I believe our existing mocked vLLM application then only needs to contain the features necessary for AIBrix (e.g., routing strategy).
Use Case
We can use the image from this repository directly in our CI: https://github.com/llm-d/llm-d-inference-sim/releases. This would also reduce the amount of test data we need to maintain in our current mocked application.
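A minimal sketch of what such an e2e test could look like, assuming CI starts a simulator container from the released image and exposes it on localhost:8000 (the address, port, model name, and endpoint usage here are assumptions for illustration, not values taken from the simulator's docs); the test just exercises an OpenAI-style chat-completions request and checks for a successful response:

```go
package e2e

import (
	"bytes"
	"encoding/json"
	"net/http"
	"testing"
	"time"
)

// simURL is where CI is assumed to expose the simulator container started
// from the released image; adjust to however the pipeline deploys it.
const simURL = "http://localhost:8000"

// TestChatCompletionAgainstSimulator sends a minimal OpenAI-style
// chat-completions request to the simulator and only checks that it answers.
func TestChatCompletionAgainstSimulator(t *testing.T) {
	payload, err := json.Marshal(map[string]any{
		"model": "facebook/opt-125m", // hypothetical model name the sim is configured to serve
		"messages": []map[string]string{
			{"role": "user", "content": "ping"},
		},
	})
	if err != nil {
		t.Fatalf("failed to build request body: %v", err)
	}

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Post(simURL+"/v1/chat/completions", "application/json", bytes.NewReader(payload))
	if err != nil {
		t.Fatalf("request to simulator failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status from simulator: %d", resp.StatusCode)
	}
}
```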
Proposed Solution
@zhengkezhou1 this is a great idea. That would be helpful for performance-related testing.
I will start on this soon...
Back to this: I'm curious whether the features of llm-d-inference-sim are already suitable for our e2e tests. 🤔 For example, it seems that the metrics exposed by llm-d-inference-sim are not complete. FYI: https://github.com/llm-d/llm-d-inference-sim/issues/191
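One way to answer that concretely would be a small check like the sketch below, which scrapes the simulator's Prometheus endpoint and flags any vLLM metric our tests rely on that is missing. The metric list, port, and /metrics path are assumptions for illustration, not a statement of what the simulator actually exposes:

```go
package e2e

import (
	"io"
	"net/http"
	"strings"
	"testing"
)

// requiredMetrics lists vLLM metric names our routing logic reads;
// treat the list as illustrative rather than exhaustive.
var requiredMetrics = []string{
	"vllm:num_requests_running",
	"vllm:num_requests_waiting",
	"vllm:gpu_cache_usage_perc",
}

// TestSimulatorExposesRequiredMetrics scrapes the simulator's metrics
// endpoint and reports any expected metric that is missing, as a quick way
// to judge whether the sim is complete enough for our e2e tests.
func TestSimulatorExposesRequiredMetrics(t *testing.T) {
	// The /metrics path and port are assumptions about how CI exposes the sim.
	resp, err := http.Get("http://localhost:8000/metrics")
	if err != nil {
		t.Fatalf("failed to scrape metrics: %v", err)
	}
	defer resp.Body.Close()

	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("failed to read metrics body: %v", err)
	}

	for _, name := range requiredMetrics {
		if !strings.Contains(string(raw), name) {
			t.Errorf("metric %q not exposed by the simulator", name)
		}
	}
}
```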
No, this is a long-term goal. The mock app will only mock AIBrix feature behavior, and the vLLM features should be provided by llm-d-inference-sim.