cheyang
@fengshunli Thank you for your suggestions. We have integrated them into our continuous integration process, which has helped us catch code issues. We already have a significant...
You can use the following command for details:

```
kubectl inspect gpushare -d
```
Could you please check the output of:

```
kubectl get po -n kube-system | grep gpushare
```
It doesn't depend on nvidia-docker2; it depends on nvidia-container-runtime, which also works with containerd.
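For reference, this is roughly how containerd can be pointed at nvidia-container-runtime. This is a sketch, not a definitive configuration: the exact plugin section names vary between containerd versions, and the binary path is an assumption about a typical install.

```toml
# /etc/containerd/config.toml (containerd 1.4+ CRI plugin layout)
[plugins."io.containerd.grpc.v1.cri".containerd]
  # Make the NVIDIA runtime the default for all containers
  default_runtime_name = "nvidia"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
  # Assumed install path; check with `which nvidia-container-runtime`
  BinaryName = "/usr/bin/nvidia-container-runtime"
```

After editing the file, restart containerd (e.g. `systemctl restart containerd`) for the change to take effect.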
Could you check this? https://yq.aliyun.com/articles/655145?spm=a2c4e.11155435.0.0.72845622HF5QSG . Sorry, it's in Chinese. If you need an English version, please let me know.
Thank you very much for your great contribution! It's an awesome work! We will test it as soon as possible!
I think it can work only when the Kubernetes default scheduler can be configured.
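Configuring the default scheduler here means registering a scheduler extender via a policy file. A minimal sketch of such a policy is below; the URL, port, and verb paths are hypothetical placeholders and must match how the extender is actually exposed in your cluster.

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:32766/gpushare-scheduler",
      "filterVerb": "filter",
      "bindVerb": "bind",
      "enableHttps": false,
      "nodeCacheCapable": true,
      "managedResources": [
        {
          "name": "aliyun.com/gpu-mem",
          "ignoredByScheduler": false
        }
      ]
    }
  ]
}
```

kube-scheduler is then started with this file passed via its policy-config option, so filter and bind decisions for pods requesting `aliyun.com/gpu-mem` are delegated to the extender.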
I think 200 MiB is not enough to run the TensorFlow application.
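A larger request can be expressed in the pod spec through the `aliyun.com/gpu-mem` extended resource. This is an illustrative sketch: the pod name and image are placeholders, and the unit of the value (GiB vs. MiB) depends on how the device plugin was deployed, so check your cluster's configuration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tf-gpu-share   # hypothetical name
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:latest-gpu
    resources:
      limits:
        # Shared GPU memory request; unit depends on the plugin's deployment
        aliyun.com/gpu-mem: 4
```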
Did you install kubectl-inspect-gpushare? You can check it with that CLI.
> I hope that the two resources both aliyun.com/gpu-mem and nvidia.com/gpu can coexist in k8s system.
>
> Currently, pods using aliyun.com/gpu-mem resources and pods using nvidia.com/gpu resources are actually...