PR-323 Unit tests for toolkits.
Why
- The current test workflow in the SDK covers the core functionality and includes basic tests that simulate model upload and model run, but we needed integrated unit tests for each toolkit's model-upload path.
How
- This pull request introduces a new workflow for maintainer-specific tests, adds support for a vLLM-based OpenAI-compatible server, and implements end-to-end testing for model uploads using dummy configurations. Key changes include the addition of a GitHub Actions workflow, a new model class for vLLM integration, configuration and dependency files, and comprehensive test cases.
- Workflow and Testing Enhancements: .github/workflows/maintainer_tests.yml: Added a new GitHub Actions workflow for maintainer-specific tests, triggered manually via workflow_dispatch. This workflow sets up Python 3.11, installs dependencies, and runs tests marked with the maintainer_approval marker.
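The steps above suggest a workflow roughly like the following sketch. The file path, Python version, and `maintainer_approval` marker come from this description; the action versions and install commands are assumptions:

```yaml
# Hypothetical sketch of .github/workflows/maintainer_tests.yml;
# action versions and install steps are assumptions, not the PR's exact file.
name: Maintainer Tests

on:
  workflow_dispatch:

jobs:
  maintainer-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r tests/runners/dummy_vllm_models/requirements.txt
      - name: Run maintainer tests
        run: pytest -m maintainer_approval tests/
```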
- vLLM Model Integration: tests/runners/dummy_vllm_models/1/model.py: Introduced the VllmFacebookOpt125M model class, which integrates with a vLLM-based OpenAI-compatible server. The class includes methods to load the model and perform predictions. A utility function, vllm_openai_server, was added to start the server with configurable parameters.
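The body of `vllm_openai_server` is not shown in the description. As a rough sketch (the function name comes from the PR; the parameters and helper are assumptions), starting a vLLM OpenAI-compatible server typically means launching vLLM's `api_server` entrypoint as a subprocess:

```python
import subprocess
import sys

def build_vllm_server_command(model, host="0.0.0.0", port=8000, extra_args=None):
    """Build the command line for vLLM's OpenAI-compatible server.

    The entrypoint module is vLLM's real one; this helper itself is a
    sketch of what a vllm_openai_server utility might do, not the PR's code.
    """
    cmd = [
        sys.executable, "-m", "vllm.entrypoints.openai.api_server",
        "--model", model,
        "--host", host,
        "--port", str(port),
    ]
    if extra_args:
        cmd.extend(extra_args)
    return cmd

def vllm_openai_server(model, **kwargs):
    # Launch the server in the background; callers should poll the
    # /v1/models endpoint until it responds before sending requests.
    return subprocess.Popen(build_vllm_server_command(model, **kwargs))
```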
- Configuration and Dependencies: tests/runners/dummy_vllm_models/config.yaml: Added a configuration file specifying model metadata, build information, and inference compute requirements, including Hugging Face checkpoints. tests/runners/dummy_vllm_models/requirements.txt: Added dependencies for the vLLM server and related libraries, including torch, transformers, and clarifai.
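The config file itself is not reproduced here; a hypothetical fragment illustrating the three areas the description mentions (model metadata, build information, inference compute with a Hugging Face checkpoint), with every field name an assumption rather than the PR's actual schema:

```yaml
# Hypothetical sketch of tests/runners/dummy_vllm_models/config.yaml;
# field names are assumptions, not the real schema.
model:
  id: dummy-vllm-model
  model_type_id: text-to-text

build_info:
  python_version: "3.11"

inference_compute_info:
  cpu_limit: "2"
  num_accelerators: 1

checkpoints:
  type: huggingface
  repo_id: facebook/opt-125m
```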
- End-to-End Testing: tests/runners/test_vllm_model_upload.py: Added end-to-end tests for the model upload flow. Tests include creating a temporary Clarifai app, validating configurations, uploading a model version, and cleaning up resources. Fixtures were added for reusable test setup.
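The flow above (temporary app, upload, cleanup, reusable fixtures) can be sketched as a pytest layout. The fixture and test names here are hypothetical stand-ins, not the PR's actual code:

```python
import pytest

# Hypothetical sketch of the end-to-end test layout; fixture and helper
# names are assumptions, not the PR's actual code.

@pytest.fixture(scope="module")
def temp_app():
    """Create a temporary Clarifai app for the run, tear it down afterwards."""
    app = {"id": "ci-vllm-upload-test"}  # stand-in for a real Clarifai App object
    yield app
    app.clear()  # stand-in for deleting the app / cleaning up resources

@pytest.mark.maintainer_approval
def test_vllm_model_upload(temp_app):
    # 1. validate the dummy config, 2. upload a model version into the
    # temporary app, 3. assert the new version becomes available.
    assert temp_app["id"] == "ci-vllm-upload-test"
```

Marking the test with `maintainer_approval` is what lets the workflow select it via `pytest -m maintainer_approval`.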
Tests
- Tested locally with the changes.
I personally like the maintainer-approved label. Wonder what @luv-bansal @zeiler think about it.
not sure what you mean but @luv-bansal you're reviewing this one right?
@zeiler - We introduced the term maintainer-approved in this PR. The test mocks a model upload with the vLLM toolkit; we wrote it to catch errors caused by SDK version changes.
Since this test is heavy and takes some time to run, it only runs after the maintainer-approved label is added, a pattern used by several open-source repositories. After the initial commits and PR approval, any member of clarifai-org can add the label, which then triggers these tests.
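Label-gating a workflow is usually done by triggering on the `labeled` event and checking the label name in a job condition. A sketch of the common pattern (not necessarily this PR's exact trigger configuration):

```yaml
# Sketch of label-gated triggering; the PR's actual setup may differ.
on:
  pull_request:
    types: [labeled]

jobs:
  maintainer-tests:
    if: contains(github.event.pull_request.labels.*.name, 'maintainer-approved')
    runs-on: ubuntu-latest
    steps:
      - run: pytest -m maintainer_approval tests/
```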
| Package | Line Rate | Health |
|---|---|---|
| clarifai | 43% | ❌ |
| clarifai.cli | 42% | ❌ |
| clarifai.cli.templates | 28% | ❌ |
| clarifai.client | 69% | ✔ |
| clarifai.client.auth | 66% | ✔ |
| clarifai.constants | 100% | ✔ |
| clarifai.datasets | 100% | ✔ |
| clarifai.datasets.export | 80% | ✔ |
| clarifai.datasets.upload | 75% | ✔ |
| clarifai.datasets.upload.loaders | 37% | ❌ |
| clarifai.models | 100% | ✔ |
| clarifai.modules | 0% | ❌ |
| clarifai.rag | 72% | ✔ |
| clarifai.runners | 12% | ❌ |
| clarifai.runners.models | 59% | ✔ |
| clarifai.runners.pipeline_steps | 45% | ❌ |
| clarifai.runners.pipelines | 85% | ✔ |
| clarifai.runners.utils | 62% | ✔ |
| clarifai.runners.utils.data_types | 72% | ✔ |
| clarifai.schema | 100% | ✔ |
| clarifai.urls | 60% | ✔ |
| clarifai.utils | 56% | ✔ |
| clarifai.utils.evaluation | 67% | ✔ |
| clarifai.workflows | 95% | ✔ |
| Summary | 62% (7347 / 11809) | ✔ |
Minimum allowed line rate is 50%