Setepenre
Make pylint more inclusive of source directories
FIX

```diff
 if not reserve:
-    self.release(trial)
+    self.release(trial, status='new')
```
### 🐛 Describe the bug

```python
$ time python -c "import torch.cuda; print(torch.cuda.is_available())"
True

real    3m36.487s
user    0m4.613s
sys     0m34.452s
```

### Versions

```
Collecting environment information...
PyTorch version: 2.0.0+rocm5.4.2...
```
Intel Gaudi & GPU Max come with their own dist backends (hccl and ccl, respectively). This patch enables those GPUs to be used in parallel to speed up training.
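A minimal sketch of what backend selection along these lines could look like, assuming the plugin-provided device namespaces (`torch.hpu` from habana_frameworks for Gaudi, `torch.xpu` from intel_extension_for_pytorch for GPU Max); the checks below are illustrative assumptions, not the patch itself:

```python
import torch
import torch.distributed as dist

def pick_dist_backend() -> str:
    # Illustrative backend selection (assumption, not the patch's code).
    # The hpu/xpu namespaces only exist once the vendor plugins are imported.
    if getattr(torch, "hpu", None) is not None and torch.hpu.is_available():
        return "hccl"   # Intel Gaudi
    if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
        return "ccl"    # Intel GPU Max (oneCCL bindings)
    if torch.cuda.is_available():
        return "nccl"   # NVIDIA / ROCm GPUs
    return "gloo"       # CPU fallback

# Each worker initializes the process group with the detected backend;
# env:// init is assumed (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE set).
dist.init_process_group(backend=pick_dist_backend())
```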
The recipe [full_finetune_distributed](https://github.com/pytorch/torchtune/blob/v0.3.0/recipes/full_finetune_distributed.py) appears to be much slower in v0.3 than in [v0.2.1](https://github.com/pytorch/torchtune/blob/v0.2.1/recipes/full_finetune_distributed.py). Everything seems to work as usual, but my job that used to work in v0.2.1 times out in...
I installed the packages using

```
FORCE_ONLY_CUDA=1 pip install -U -v --no-build-isolation git+https://github.com/rusty1s/pytorch_cluster.git
FORCE_ONLY_CUDA=1 pip install -U -v --no-build-isolation git+https://github.com/rusty1s/pytorch_scatter.git
FORCE_ONLY_CUDA=1 pip install -U -v --no-build-isolation git+https://github.com/rusty1s/pytorch_sparse.git
```

The installation...
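After an install like this, a quick sanity check along these lines can confirm that the extensions import and that PyTorch was built with the expected CUDA toolkit (a hypothetical snippet, not from the original report):

```python
# Hypothetical post-install check: import each extension and print the CUDA
# version PyTorch itself was built with, to spot toolkit mismatches.
import torch
import torch_cluster
import torch_scatter
import torch_sparse

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("torch_cluster:", torch_cluster.__version__)
print("torch_scatter:", torch_scatter.__version__)
print("torch_sparse:", torch_sparse.__version__)
```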