soolaugust
Finally I found the cause of this: 1. kubeadm disables the read-only port 10255 by default since 1.11, refer to [kubeadm: Improve the kubelet default configuration security-wise](https://github.com/kubernetes/kubernetes/pull/64187), so cAdvisor couldn't detect...
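For reference, a minimal sketch of re-opening the read-only port on a kubeadm node so cAdvisor can scrape it again. The config path `/var/lib/kubelet/config.yaml` is the kubeadm default and may differ on your cluster, and note that this undoes the hardening introduced in the PR above:

```shell
# Re-enable the kubelet read-only port on the affected node.
# Assumes the kubeadm default kubelet config path; adjust if yours differs.
if grep -q '^readOnlyPort:' /var/lib/kubelet/config.yaml; then
  sudo sed -i 's/^readOnlyPort:.*/readOnlyPort: 10255/' /var/lib/kubelet/config.yaml
else
  echo 'readOnlyPort: 10255' | sudo tee -a /var/lib/kubelet/config.yaml
fi

# Restart the kubelet so the new config takes effect
sudo systemctl restart kubelet

# Verify: the read-only port should now answer on 10255
curl -s http://localhost:10255/pods | head -c 200
```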
@cheyang I think the guide should include some tips about the 1st problem, because the current guide on "Monitor GPUs of the training job" does not work on versions later than 1.11...
/assign cheyang
I am not familiar with these two approaches. Is there any reference? I'd like to dig into it.
Try the following:

```shell
# use Node 18 via nvm
nvm install 18
nvm use 18

# fetch and set up dalai
git clone https://github.com/cocktailpeanut/dalai
cd dalai
node setup.js

# install the 7B Alpaca model, then start the server
mkdir -p ~/dalai/alpaca/models
./dalai alpaca install 7B
./dalai serve
```
Try this one: https://github.com/soolaugust/chatgpt-v2ray
Voting for this issue; I am also curious about how to handle the situation where a pod uses more GPU memory than its limit.
Here is an excerpt of the server logs:

```console
10:26:11.304 [milo-shared-thread-pool-4] DEBUG org.eclipse.milo.opcua.sdk.server.namespaces.OpcUaNamespace - Read value Running from attribute Value of NodeId{ns=0, id=2259}
10:26:11.433 [milo-shared-scheduled-executor-3] DEBUG org.eclipse.milo.opcua.sdk.server.subscriptions.Subscription - [id=1] lifetime counter...
```