FastNN
FastNN provides distributed training examples that use EPL.
The bert example fails with "OP_REQUIRES failed at nccl_communicator.cc:116 : Internal: unhandled system error". How can this be resolved?
2023-09-27 09:21:54.582250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:worker/replica:0/task:1/device:GPU:0 with 5211 MB memory) -> physical GPU (device: 0, name: Tesla V100-PCIE-16GB, pci bus id: 0000:0b:00.0, compute capability: 7.0) 2023-09-27 09:21:54.583242: I...
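NCCL's "unhandled system error" is a generic failure; the usual first step is to rerun with NCCL's own logging enabled and read the INIT/NET messages it prints. A minimal sketch of the environment settings (these are standard NCCL variables, not FastNN-specific; whether they apply to this particular error is an assumption):

```shell
# Ask NCCL to print detailed initialization and transport logs on the next run.
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,NET

# If the logs point at InfiniBand or GPU peer-to-peer problems, these can
# rule them out (at a performance cost):
# export NCCL_IB_DISABLE=1
# export NCCL_P2P_DISABLE=1

# Then re-run the bert training script as before.
```

The INFO-level output typically names the interface or transport that failed, which narrows the "system error" down to a network, driver, or shared-memory issue.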
Regarding EPL's ease of use: both the model programming interface and the training interface are based on TensorFlow. Given that, will a PyTorch version be released later, or does TensorFlow have particular advantages here? Thanks!
**Environment:** a container built from the nvcr.io/nvidia/tensorflow:21.12-tf1-py3 image
**Code:** FastNN/resnet/resnet_split.py
**Commands:**
Server 1: TF_CONFIG='{"cluster":{"worker":["172.20.21.181:55375","172.20.21.189:55376"]},"task":{"type":"worker","index":0}}' bash scripts/train_split.sh
Server 2: TF_CONFIG='{"cluster":{"worker":["172.20.21.181:55375","172.20.21.189:55376"]},"task":{"type":"worker","index":1}}' bash scripts/train_split.sh
Output on server 1: (screenshot)
Output on server 2: (screenshot)
As shown, server 1 prints "still waiting" only twice and then stops, which indicates it has received server 2's reply, yet it does not continue running.
**Additional note:** In the same environment, bert runs with distributed training, so the servers can connect to each other and train across nodes normally.
Is this a problem with how I am launching the job, or does the code need to be modified?
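For reference, the TF_CONFIG value passed to both servers above is a JSON document that TensorFlow's distributed runtime parses into a cluster spec; each process gets the same "cluster" section and differs only in "task.index". A minimal sketch of that structure (plain Python, no TensorFlow required; the addresses are the ones from the commands above):

```python
import json
import os

# Same cluster description on both servers; only "task.index" differs.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["172.20.21.181:55375", "172.20.21.189:55376"],
    },
    # Server 1 sets index 0; server 2 would set index 1.
    "task": {"type": "worker", "index": 0},
})

config = json.loads(os.environ["TF_CONFIG"])
workers = config["cluster"]["worker"]
print(len(workers))              # number of worker hosts in the cluster -> 2
print(config["task"]["index"])   # this process's position in the worker list -> 0
```

If the two processes disagree on the worker list, or two processes claim the same index, the cluster handshake stalls in exactly the "still waiting" pattern described above, so verifying both TF_CONFIG strings is a cheap first check.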