Xiang Long

12 comments by Xiang Long

> Run this code before adding the node.
>
> ```shell
> cd ~/pai-deploy/kubespray/
> ansible-playbook -i ${HOME}/pai-deploy/cluster-cfg/hosts.yml docker-cache-config-distribute.yml --limit=new-worker-node || exit $?
> ```
>
> Where is the...

@Binyang2014 It seems that when we add the enable_docker_cache config there is no option to use change_node.py. And it would be more straightforward for users if the new node's docker config were synced when...

I have forked this repo and fixed this issue. If anybody wants to use it, please try `brew install swordfith/pentest/dirb`. I have also proposed a pull request; I hope sidaf can accept...

Would you mind pasting your script? It seems CUDA_VISIBLE_DEVICES is not being used correctly for isolation.
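For reference, here is a minimal sketch of the kind of per-process isolation CUDA_VISIBLE_DEVICES is meant to provide; the script name and GPU indices are placeholders, not taken from this thread.

```python
# Minimal sketch of per-process GPU isolation via CUDA_VISIBLE_DEVICES.
# "train.py" and the GPU indices are hypothetical placeholders.
import os
import subprocess

procs = []
for gpu_id in (0, 1):
    env = os.environ.copy()
    # Each child process only sees its assigned GPU and addresses it as cuda:0.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    procs.append(subprocess.Popen(["python", "train.py"], env=env))

for p in procs:
    p.wait()
```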

> It is our internal tool-kit and is adapted to many transformer-based models. The script
>
> ```
> deepspeed --num_gpus 8 benchmark.py \
> -it \
> -t_data $TRAINDATA...
> ```

It appears that the markdown table is not being handled correctly.

The amount of data shouldn't matter; we previously tested the LoRA finetune script on a 2080Ti and it ran with batch size 1. If it's convenient, could you paste a share link to your Colab notebook?
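For context, a minimal sketch of a batch-size-1 LoRA setup along these lines, using Hugging Face transformers and peft; the model id, target modules, and hyperparameters are illustrative assumptions, not the repo's actual finetune script.

```python
# Sketch of a batch-size-1 LoRA finetune setup (illustrative values only).
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "openbmb/MiniCPM-2B-sft-bf16"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adjust to the model's module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

args = TrainingArguments(
    output_dir="lora-out",
    per_device_train_batch_size=1,   # the batch size reported to fit on a 2080Ti
    gradient_accumulation_steps=8,   # compensate for the small batch
    fp16=True,
    num_train_epochs=1,
)
```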

![image](https://github.com/OpenBMB/MiniCPM/assets/18397468/e4cac034-e261-4ebf-922d-a14c5d7e9e79)
Our experimental results are above ⬆️; the 15 GB usage is probably because offload was not enabled.

> I'm using a 2080Ti 22G. LoRA runs under WSL with about 15 GB of GPU memory used, but CUDA utilization isn't maxed out yet.
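As a side note, a minimal sketch of what enabling DeepSpeed ZeRO optimizer offload looks like; the stage and batch settings are assumptions, not the exact configuration used in the experiment above.

```python
# Sketch of a DeepSpeed config that offloads optimizer state to CPU.
# Stage and batch settings are illustrative, not the exact setup used above.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
}

# Save the config so it can be passed to the training script's DeepSpeed option.
with open("ds_offload.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```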