RTX 50XX support?
Nihao!
When can we expect RTX 50XX support? The 5070 Ti and 5080 are not working at the moment.
Thank you!
Not compatible. I tried updating torch and it still wouldn't run. Just give up!
We need an official answer from the developer.
Despite the latest git update, RVC wasn't working for me, so I upgraded to torch-2.7.0.dev+cu128 to support my RTX 50XX GPU. While the GPU is now correctly recognized, the program fails with the following error: _pickle.UnpicklingError: Weights only load failed. It seems to be related to PyTorch 2.6+ changing torch.load() to default to weights_only=True, preventing RVC from loading the full model.
Is there a fix planned for this, or should we manually override this behavior in the code by setting torch.serialization.default_weights_only = False?
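For reference, here is a minimal sketch of the per-call override being discussed, assuming the checkpoint comes from a trusted source; the file name below is just a placeholder:

import torch

# PyTorch 2.6+ defaults torch.load() to weights_only=True, which rejects
# arbitrary pickled Python objects inside the checkpoint and raises
# "_pickle.UnpicklingError: Weights only load failed".
# Passing weights_only=False per call restores the old behavior; only do
# this for checkpoints you trust.
state = torch.load("some_checkpoint.pt", map_location="cpu", weights_only=False)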
Try WSL2, and you can refer to this: https://github.com/RVC-Boss/GPT-SoVITS/issues/2026
Has anyone had luck running this in WSL2 or Docker?
I've changed line 314 of checkpoint_utils.py to:

with open(local_path, "rb") as f:
    state = torch.load(f, map_location=torch.device("cpu"), weights_only=False)  # weights_only=False is needed since PyTorch 2.6+ changed the default

This allows checkpoints to load. I'm currently checking the training process with this fix.
Just to clarify, I did install PyTorch for CUDA 12.8.
Looks like for training you also need to downgrade matplotlib; matplotlib==3.8.1 works for me so far.
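For anyone on the cu128 nightly wondering whether the card is actually picked up, a quick sanity check (a sketch assuming a standard PyTorch install; the expected output strings are illustrative):

import torch

# Verify the nightly build sees the RTX 50XX card and was compiled with
# the Blackwell architecture (sm_120).
print(torch.__version__)              # e.g. a 2.7.0.dev+cu128 nightly
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce RTX 5080
print(torch.cuda.get_arch_list())     # should include an sm_120 entry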
Will give this a shot.
Thanks, I used this method to solve the problem.
If you don't want to modify the underlying fairseq code, you can instead register the class as a safe global before the load call.
Add this at line 20 of infer/lib/rtrvc.py:
torch.serialization.add_safe_globals([fairseq.data.dictionary.Dictionary])
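For completeness, a self-contained version of that registration (a sketch, assuming fairseq is installed and the checkpoint is trusted; the file name below is a placeholder):

import torch
from fairseq.data.dictionary import Dictionary

# Allowlist fairseq's Dictionary so the default weights_only=True load in
# PyTorch 2.6+ can unpickle it instead of raising an UnpicklingError.
torch.serialization.add_safe_globals([Dictionary])

state = torch.load("some_checkpoint.pt", map_location="cpu")  # placeholder path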
How would you downgrade matplotlib in the built-in environment?