Append Ray head label selector in PodAutoscaler
Pull Request Description
This change makes the PodAutoscaler consider only the engine (Ray head) pod for multi-node inference. Ray worker pods do not run an HTTP server, so they cannot expose application metrics, only resource metrics. For resource metrics, since we use Tensor Parallelism, we expect GPU utilization to be the same across pods, so scraping the head pod alone is sufficient.
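For illustration, a minimal Go sketch of the idea is below. It assumes the standard KubeRay node-type label (`ray.io/node-type=head`) identifies the head pod; the function and variable names here are hypothetical and are not the actual controller code.

```go
// Sketch: narrow the PodAutoscaler's pod selector to Ray head pods so that
// only the engine pod (which serves HTTP application metrics) is scraped,
// and Ray worker pods are excluded.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	// Standard KubeRay label distinguishing head and worker pods.
	rayNodeTypeLabel = "ray.io/node-type"
	rayHeadNodeValue = "head"
)

// appendRayHeadSelector returns a copy of the scale target's label selector
// with the Ray head requirement appended, so worker pods are not considered
// for metric collection. (Illustrative helper, not the real controller API.)
func appendRayHeadSelector(sel *metav1.LabelSelector) *metav1.LabelSelector {
	out := sel.DeepCopy()
	if out.MatchLabels == nil {
		out.MatchLabels = map[string]string{}
	}
	out.MatchLabels[rayNodeTypeLabel] = rayHeadNodeValue
	return out
}

func main() {
	// Hypothetical scale-target selector for a multi-node deployment.
	base := &metav1.LabelSelector{MatchLabels: map[string]string{"app": "llama-3-70b"}}
	headOnly := appendRayHeadSelector(base)

	s, _ := metav1.LabelSelectorAsSelector(headOnly)
	fmt.Println(s.String()) // app=llama-3-70b,ray.io/node-type=head
}
```

Only pods matching the resulting selector (i.e. the Ray head / engine pod) would then be queried for metrics.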
Related Issues
Resolves: part of https://github.com/vllm-project/aibrix/issues/758
Important: Before submitting, please complete the description above and review the checklist below.
Contribution Guidelines (Expand for Details)
We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:
Pull Request Title Format
Your PR title should start with one of these prefixes to indicate the nature of the change:
- [Bug]: Corrections to existing functionality
- [CI]: Changes to build process or CI pipeline
- [Docs]: Updates or additions to documentation
- [API]: Modifications to aibrix's API or interface
- [CLI]: Changes or additions to the Command Line Interface
- [Misc]: For changes not covered above (use sparingly)
Note: For changes spanning multiple categories, use multiple prefixes in order of importance.
Submission Checklist
- [ ] PR title includes appropriate prefix(es)
- [ ] Changes are clearly explained in the PR description
- [ ] New and existing tests pass successfully
- [ ] Code adheres to project style and best practices
- [ ] Documentation updated to reflect changes (if applicable)
- [ ] Thorough testing completed, no regressions introduced
By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.