RECCE
How to run on multiple GPUs?
Hi, it has been a long time since this code was released, and I no longer have a comparable environment set up for this project. However, you could try the following command first. Please also refer to the PyTorch Distributed Overview for multi-GPU training.
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 12345 train.py --config path/to/config.yaml
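For reference, a train.py driven by that launcher typically reads the per-process environment variables the launcher sets and wraps the model in DistributedDataParallel. Below is a minimal sketch under that assumption; the helper name `dist_env` and the placeholder model are illustrative, not RECCE's actual code.

```python
# Minimal DDP skeleton compatible with torch.distributed.launch / torchrun.
# The launcher spawns one process per GPU and sets RANK, LOCAL_RANK,
# WORLD_SIZE, MASTER_ADDR and MASTER_PORT for each of them.
import os


def dist_env():
    """Read the launcher-provided env vars; defaults cover a plain
    single-process run started without any launcher."""
    return {
        "rank": int(os.environ.get("RANK", "0")),
        "local_rank": int(os.environ.get("LOCAL_RANK", "0")),
        "world_size": int(os.environ.get("WORLD_SIZE", "1")),
    }


def main():
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    env = dist_env()
    distributed = env["world_size"] > 1
    if distributed:
        # NCCL backend for multi-GPU; MASTER_ADDR/MASTER_PORT are
        # already in the environment, so no extra arguments are needed.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(env["local_rank"])

    model = torch.nn.Linear(10, 2).cuda(env["local_rank"])  # placeholder model
    if distributed:
        model = DDP(model, device_ids=[env["local_rank"]])
    # ... build the dataset with a DistributedSampler, then train as usual ...


if __name__ == "__main__":
    main()
```

Note that newer PyTorch versions deprecate `torch.distributed.launch` in favor of `torchrun`, which sets the same environment variables, so the sketch above works with either.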
Closing due to inactivity. Please feel free to reopen this issue if you still have related problems.