Rui Wang
Would it be possible to unify the package name and CUDA version into a single version string, the way PyTorch does, e.g. `torch==1.11.0+cu113` instead of `bagua-cuda113==0.9.0`? https://github.com/PyTorchLightning/pytorch-lightning/pull/12723
torch native AMP + Apex AMP
The QAdam algorithm occasionally fails in CI when using the `baguasys/bagua:master-pytorch-1.9.1-cuda11.1-cudnn8` image.
We can add a dataloader/dataset wrapper that caches the mapping from item to preprocessed result, to accelerate data loading (a sketch follows the list below).
- [x] Redis backend
- [ ] RocksDB backend
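A minimal sketch of such a wrapper, assuming a local Redis server and pickle-serializable preprocessed items; the class name `CachedDataset` and its parameters are hypothetical, not an existing Bagua API.

```python
# Hypothetical caching dataset wrapper (sketch only), assuming a Redis server
# on localhost:6379 and that preprocessed items can be pickled.
import pickle

import redis
from torch.utils.data import Dataset


class CachedDataset(Dataset):
    """Wraps a dataset and caches index -> preprocessed result in Redis."""

    def __init__(self, dataset, host="127.0.0.1", port=6379, key_prefix="cached_ds"):
        self.dataset = dataset
        self.key_prefix = key_prefix
        self.client = redis.StrictRedis(host=host, port=port)

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        key = f"{self.key_prefix}:{index}"
        cached = self.client.get(key)
        if cached is not None:
            # Cache hit: skip the (possibly expensive) preprocessing.
            return pickle.loads(cached)
        # Cache miss: run the underlying dataset's preprocessing, then store it.
        item = self.dataset[index]
        self.client.set(key, pickle.dumps(item))
        return item
```

Usage would be wrapping an existing dataset before passing it to a `DataLoader`, e.g. `DataLoader(CachedDataset(my_dataset), batch_size=32)`; a RocksDB backend could swap in a local key-value store behind the same get/set interface.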
The code is partially copied from #656.
See this: https://github.com/Lightning-AI/lightning/pull/16225