[model-gateway] Refactor router e2e responses tests
Motivation
In the e2e_response_api module, we have many non-test base classes like StructuredOutputBaseTest, so we must manually maintain a list of those classes for pytest to skip. Under the current backend-based test structure, test_http_backend.py and test_grpc_backend.py will keep inheriting more and more test classes as we develop more features to test.
Modifications
- Remove the backend-based e2e response tests and convert the non-test base classes like StructuredOutputBaseTest into feature-based tests
- Create a backend fixture that can be passed to feature-based tests to set up the backend
- Parameterize the feature-based tests with the names of the backends
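As a rough illustration of the fixture-plus-parameterization pattern described above, here is a minimal pytest sketch. The fixture name `setup_backend`, the backend list, and the test body are assumptions for illustration, not the actual implementation in this PR:

```python
# Hypothetical sketch of a parameterized backend fixture for feature-based
# e2e tests. Names and backend list are illustrative, not from the PR.
import pytest

BACKENDS = ["openai", "grpc", "harmony"]  # illustrative subset


@pytest.fixture(params=BACKENDS)
def setup_backend(request):
    # In the real suite this would launch and tear down the requested
    # backend server; here we just yield the backend name.
    backend = request.param
    yield backend


def test_basic_crud(setup_backend):
    # A feature-based test runs once per backend via fixture
    # parameterization, so no backend-specific test files are needed.
    assert setup_backend in BACKENDS
```

With this pattern, adding a new backend means extending the fixture's `params` list rather than adding another backend-specific test file.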
Test results on GPU (xai backend skipped)
Accuracy Tests
Benchmarking and Profiling
Checklist
- [x] Format your code according to Format code with pre-commit.
- [x] Add unit tests according to Run and add unit tests.
- [x] Update documentation according to Write documentation.
- [x] Provide accuracy and speed benchmark results according to Test the accuracy and Benchmark the speed.
- [x] Follow the SGLang code style guidance.
- [ ] Work with maintainers to merge your PR. See the PR Merge Process.
Summary of Changes
Hello @XinyueZhang369, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly overhauls the end-to-end testing suite for the model-gateway's response API. The primary goal is to streamline the test architecture, moving away from monolithic, backend-specific test files that were difficult to manage and scale. By adopting a feature-based testing approach with pytest fixtures and parameterization, the changes aim to make the test suite more robust, easier to extend, and less prone to maintenance overhead.
Highlights
- Test Structure Refactoring: The e2e response tests have been refactored from a backend-centric structure to a feature-based organization, improving modularity and maintainability.
- Removal of Backend-Specific Test Files: Previous backend-specific test files like `test_grpc_backend.py` and `test_http_backend.py` have been removed, eliminating the need for manual exclusion of non-test base classes.
- Introduction of Pytest Fixtures and Parameterization: A new `setup_backend` pytest fixture has been introduced to dynamically configure and launch different backends (OpenAI, XAI, gRPC, Harmony, Oracle Store) for feature tests, allowing tests to be parameterized across various backend environments.
- New Feature-Based Test Modules: Dedicated test modules for specific features such as basic CRUD operations, function calling, state management, and structured output have been created under a new `features` directory.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
/tag-and-rerun-ci
lgtm, @key4ng please also take a look