Retrieval-based-Voice-Conversion-WebUI
Support for Apple's MPS
Will MPS be supported in the future and will it be possible to train models on Apple Silicon chips? 🙏
Thank you! 🙌
I'm working on this now.
Can't wait!!!!!
Hello, any update on this?
Isn't it supported?
@Naozumi520 Only for inference, not for training.
Training fixed in bf1170012564463e9b9f13e7174b1eb499bdcbd2.
Yep, forgot to mention: they haven't released it in the package yet. I did it by copying train_nsf_sim_cache_sid_load_pretrain.py and train/process_ckpt.py from the repo into my folder and replacing the existing files.
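In case it helps, here is a minimal sketch of that copy step, assuming the fixed files sit in a fresh clone of the repo next to the folder you actually run (both paths below are illustrative and need to be adjusted to your own layout):

```python
import shutil
from pathlib import Path

repo = Path("Retrieval-based-Voice-Conversion-WebUI")  # fresh clone containing the fixed files (illustrative path)
local = Path("my-rvc-install")                          # the folder you actually run (illustrative path)

# Copy the two updated training scripts over the packaged copies.
for rel in ("train_nsf_sim_cache_sid_load_pretrain.py", "train/process_ckpt.py"):
    src = repo / rel
    dst = local / rel
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # overwrite the old file, preserving metadata
    print("replaced", dst)
```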
@Naozumi520 Can you train? It gives me an error:

```
Retrieval-based-Voice-Conversion-WebUI-main/train/utils.py", line 206, in latest_checkpoint_path
    x = f_list[-1]
IndexError: list index out of range
```
That's weird. Yes I'm able to train.
@Naozumi520 Have you been able to use your trained models to successfully infer? What version of RVC do you have?
Yes. I'm using the updated0528v2 version.
It is already working, but I thought the training would be faster given my Mac's specs; in reality it is slower than training online.
Yeah, same here! I was wondering whether MPS was running on my graphics card or my CPU, since I have an Intel CPU.
Any idea how to solve this?
I think one of the reasons is that some functions in PyTorch do not support MPS. I am rewriting the code and will let you know if I find anything.
ok thank you!!!
Yes! For example, we have to set PYTORCH_ENABLE_MPS_FALLBACK to 1, and it might actually use the CPU for some functions?
Also, I'm not sure whether MPS is really using my AMD graphics card. I put 0-1 in every option.
Sorry, AMD graphics cards are not supported (from #272).
Wait, no, I'm talking about running on a Mac, which is supported by MPS.
https://developer.apple.com/metal/pytorch/
Did it work for you?
Yep, as I said before, I'm able to train and run inference. However, as you mentioned, the speed is slower than I expected.
I was referring to this https://developer.apple.com/metal/pytorch/
No, and we don't have to follow this; PyTorch already has built-in support for MPS.
If you have it, why is the training so slow? It took me 19 hours to train on a 12-minute dataset for 300 epochs.
If we had no MPS at all, we wouldn't even be able to train the model. It's likely, as @Tps-F mentioned, that some functions in torch that require CUDA are not supported by MPS yet. As I mentioned, I had to set the flag PYTORCH_ENABLE_MPS_FALLBACK to 1; this makes those unsupported functions fall back to the CPU.
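For anyone curious, here is a minimal sketch of what that flag changes, assuming you control the script's entry point; the operator used is only illustrative and may or may not have an MPS kernel in your PyTorch build:

```python
import os
# Commonly set before importing torch so the flag is seen when MPS initializes
# (assumption: you launch this script yourself rather than through the WebUI).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

if torch.backends.mps.is_available():
    x = torch.randn(4, device="mps")
    try:
        # An operator with no MPS kernel raises NotImplementedError when the fallback
        # is disabled; with the flag set, PyTorch moves the tensors to the CPU, runs
        # the op there, and copies the result back -- correct, but slower.
        y = torch.special.i0(x)  # illustrative op only
        print("result lives on:", y.device)
    except NotImplementedError as err:
        print("no MPS kernel and no CPU fallback enabled:", err)
else:
    print("MPS is not available on this machine")
```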
Is anyone training successfully on an Apple Silicon system AND utilizing the GPUs?
What device configuration do you have?
I have an M2 Max Mac Studio.
Apologies if I am behind on this issue, but it seems as though PyTorch supports MPS now, or is this not the case? Would you need to essentially "swap" this out for the currently implemented CUDA? (Sorry if I sound a little ignorant on the topic, as I am purely a hobbyist.)
MPS BACKEND
mps device enables high-performance training on GPU for MacOS devices with Metal programming framework. It introduces a new device to map Machine Learning computational graphs and primitives on highly efficient Metal Performance Shaders Graph framework and tuned kernels provided by Metal Performance Shaders framework respectively.
The new MPS backend extends the PyTorch ecosystem and provides existing scripts capabilities to setup and run operations on GPU.
To get started, simply move your Tensor and Module to the mps device:
```python
# Check that MPS is available
if not torch.backends.mps.is_available():
    if not torch.backends.mps.is_built():
        print("MPS not available because the current PyTorch install was not "
              "built with MPS enabled.")
    else:
        print("MPS not available because the current MacOS version is not 12.3+ "
              "and/or you do not have an MPS-enabled device on this machine.")
else:
    mps_device = torch.device("mps")

    # Create a Tensor directly on the mps device
    x = torch.ones(5, device=mps_device)
    # Or
    x = torch.ones(5, device="mps")

    # Any operation happens on the GPU
    y = x * 2

    # Move your model to mps just like any other device
    model = YourFavoriteNet()
    model.to(mps_device)

    # Now every call runs on the GPU
    pred = model(x)
```
PyTorch partially supports MPS. Unfortunately, not all operations can be run on MPS yet.
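For reference, here is a minimal sketch of the usual device-selection order when CUDA may be absent and MPS only partially covers the operator set; it mirrors the general idea rather than RVC's actual configuration code:

```python
import torch

# Prefer CUDA, then Apple's Metal backend, then plain CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # Apple Silicon / Metal GPU
else:
    device = torch.device("cpu")  # also the fallback target for ops with no MPS kernel

print("using device:", device)
```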