bj-wangyang
Is there any update on this question? We recently encountered the same problem when packing GPU cards: each node has 8 GPU cards, and most nodes have 4 GPU cards...
I think this is a very good feature. Do you have time to review it? @jiangkaihua @william-wang
Refer to [SLA](https://github.com/volcano-sh/volcano/blob/master/docs/design/sla-plugin.md) and add an annotation to `sc2-1`, such as `sla-waiting-time: 1s`. Make sure the configmap contains at least the following plugins:

```
- name: proportion
- name: sla...
```
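To make the suggestion above concrete, here is a minimal sketch of what the two pieces might look like, assuming a standard `volcano-scheduler-configmap` layout and that `sc2-1` is a Volcano Job; the tier structure, namespace, and annotation key shown are taken from the SLA plugin design doc linked above, but the exact values are illustrative:

```yaml
# Hypothetical scheduler configmap fragment (volcano-scheduler-configmap):
# the sla plugin must be enabled in a tier alongside proportion.
actions: "enqueue, allocate, backfill"
tiers:
- plugins:
  - name: proportion
  - name: sla
    arguments:
      sla-waiting-time: 1h   # cluster-wide default; illustrative value
---
# Hypothetical per-job override: annotate the job (here sc2-1)
# so it is promoted after waiting 1s.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: sc2-1
  annotations:
    sla-waiting-time: 1s
```

This is a sketch, not a verified configuration; check the SLA plugin design doc for the authoritative field names and placement.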
Please rebase onto the latest master and make CI happy. Thanks!
Thank you for participating in the community exchange. Could you describe your application scenario in detail, and let us know what else you hope the community will do?
Reviewed. cc @jiangkaihua @william-wang
At present, the e2e tests may fail to download the PyTorch image, which leads to CI failure. Could the CI be re-triggered in the background? cc @william-wang