New Problem! Can't use custom RVC models anymore in Colab! [ERROR mutable default: use default_factory]
As the title suggests: no matter what HF token I use, no matter which pitch algorithm I use, no matter which custom RVC model I use (I've tried many different voice models), and no matter whether I'm just testing the RVC model with text or generating a video dub, I get the same error message:
ERROR mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory
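For anyone searching: this is Python 3.11's stricter dataclass check. Since 3.11, a dataclass field whose default value is unhashable (which includes instances of ordinary dataclasses like fairseq's config classes, since `eq=True` without `frozen=True` sets `__hash__` to `None`) raises ValueError at class-definition time. A minimal sketch that reproduces the error and shows the `default_factory` form that avoids it (the class names only mirror fairseq's; this isn't fairseq code):

```python
from dataclasses import dataclass, field

@dataclass
class CommonConfig:  # name mirrors fairseq's, purely for illustration
    seed: int = 1

# A plain dataclass (eq=True, frozen=False) gets __hash__ = None, so its
# instances count as "mutable" under Python 3.11's stricter default check.
try:
    @dataclass
    class FairseqConfig:
        common: CommonConfig = CommonConfig()  # raises ValueError on 3.11+
except ValueError as err:
    print(err)  # mutable default ... for field common is not allowed: use default_factory

# The accepted form: build the default lazily via default_factory.
@dataclass
class FairseqConfigFixed:
    common: CommonConfig = field(default_factory=CommonConfig)

print(FairseqConfigFixed().common.seed)  # 1
```

So the error isn't about your HF token at all; the installed fairseq package simply predates Python 3.11's dataclass rules.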
@R3gm @b4zz4 could you please fix this problem?
This is the log:

env: YOUR_HF_TOKEN=[HIDDEN]
/content/SoniTranslate
[INFO] >> PIPER TTS enabled
[INFO] >> Coqui XTTS enabled
[INFO] >> In this app, by using Coqui TTS (text-to-speech), you acknowledge and agree to the license. You confirm that you have read, understood, and agreed to the Terms and Conditions specified at the following link: https://coqui.ai/cpml.txt .
[INFO] >> Working in: cuda
IMPORTANT: You are using gradio version 4.19.2, however version 4.44.1 is available, please upgrade.
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://e622e6e232dbd11ad8.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
[INFO] >> HuggingFace url
[INFO] >> Downloading: "https://huggingface.co/sail-rvc/yoimiya-jp/resolve/main/model.pth" to /content/SoniTranslate/downloads/model.pth
100% 52.5M/52.5M [00:00<00:00, 202MB/s]
[INFO] >> HuggingFace url
[INFO] >> Downloading: "https://huggingface.co/sail-rvc/yoimiya-jp/resolve/main/model.index" to /content/SoniTranslate/downloads/model.index
100% 80.1M/80.1M [00:00<00:00, 213MB/s]
####################################
├── model.index
└── model.pth
####################################
[ERROR] >> Directory 'downloads/repo' does not exist.
[INFO] >> Content in 'test' removed.
[INFO] >> Config: Device is cuda:0, half precision is True
[INFO] >> Parallel workers: 1
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/gradio/route_utils.py", line 235, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/gradio/blocks.py", line 1627, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/gradio/blocks.py", line 1173, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/gradio/utils.py", line 690, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/content/SoniTranslate/voice_main.py", line 502, in make_test
self(
File "/content/SoniTranslate/voice_main.py", line 566, in __call__
self.hu_bert_model = load_hu_bert(self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/SoniTranslate/voice_main.py", line 120, in load_hu_bert
from fairseq import checkpoint_utils
File "/usr/local/lib/python3.11/dist-packages/fairseq/__init__.py", line 20, in <module>
You have to go to https://huggingface.co/, open Access Tokens, and generate a token to download the models. The token has to be in the variable: export YOUR_HF_TOKEN=XXXXX
@b4zz4 I have already done that; the xxx is just an edit I made so my HF token isn't public!
I actually just generated a new one, and it still showed the same error message. Hugging Face states the token was last accessed 3 hours ago.
I have accepted the terms for all 3 HF gated repos, including pyannote_3.1. No dice!
You have to replace XXXX with your Hugging Face-generated token; you need to have an account there.
@b4zz4 Again, I have already done exactly that! I already have a Hugging Face account, I have already accepted the 3 model licenses, I generated the token, the Hugging Face token is active, and I have entered my Hugging Face token here: YOUR_HF_TOKEN=>>THIS IS WHERE I PUT MY HF TOKEN<< (Not xxx)
Again, I didn't want my token to be seen publicly on this GitHub page; that's why you saw it as xxx! I actually used my REAL Hugging Face token in the log I posted above! xxx was never what I actually used!
The problem is still there! It used to work, but now it's not!
I have set all repos to read access and everything! I have generated yet another HF token with everything ticked and it's STILL not working! Same error message! Again, this used to work a month ago; I followed everything and I still can't access custom RVC models.
Can someone please help me out with this? I can't be the only person with this issue! It was working fine last month.
@b4zz4 @R3gm I've posted screenshots as proof that I have done everything, and to show the problem to help out a bit!
Please consider looking at the images: https://imgur.com/a/FfdIuWH
Could you please help me now? I really don't want this to be ignored, as I have tried everything for the past 2 days to get this working again!
@R3gm @b4zz4 Is anyone even going to try to help? I'm starting to get a bit sick of this now! Does anyone know what has happened to SoniTranslate and why the RVC options suddenly no longer work?
Once again, please look at the pictures in the link above this post; they show that I have done everything correctly. It was working fine a month ago... but not anymore!
I'm sorry if I'm coming across as rude or disrespectful; I really am trying here, but it's been almost a week now of tearing my hair out trying to get someone to look into this problem.
No one has even said whether it is working for them or not, so I have no idea if it's just me, Hugging Face, Colab, a missing package, an update, a ban, simple server downtime... nothing! The fact that it was working fine not so long ago tells me it's definitely not something I'm doing wrong, so I can't be the only one with this issue. The simple fact that no one has mentioned whether it's working on Google Colab for them is really frustrating.
So I am really sorry if I'm coming across as a bit disrespectful... I just want some help.
Thanks for the notice. I've made some adjustments to ensure compatibility with the latest Colab update. However, since Colab now runs Python 3.11, some functionality might not work as expected.
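For anyone curious what such an adjustment typically looks like: the usual fix is rewriting fairseq's plain dataclass defaults (e.g. `common: CommonConfig = CommonConfig()`) into `field(default_factory=...)` form, which Python 3.11 accepts. A rough sketch of that rewrite as a string transform (the sample line and regex are my assumptions, not the actual patch applied here):

```python
import re

# A line in the style of fairseq/dataclass/configs.py (assumed, for illustration):
src = "    common: CommonConfig = CommonConfig()"

# Rewrite `name: SomeConfig = SomeConfig()` into the default_factory form
# that Python 3.11's stricter dataclass check accepts.
patched = re.sub(
    r"(\w+): (\w+) = \2\(\)",
    r"\1: \2 = field(default_factory=\2)",
    src,
)
print(patched)  # →     common: CommonConfig = field(default_factory=CommonConfig)
```

In practice you'd apply this kind of change across fairseq's config module (or install a fairseq build that already includes it) rather than patching one line.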
Thanks man, really appreciate it. Would it be easier to include downgrading Python in the Colab notebook as a temp fix?
Hey brosay,
I was trying to get a hold of you but couldn’t find your email—really hoping you don’t mind me hijacking this issue topic discussion 🫶.
How comfortable would you be with dockerizing this baby? There are plenty of similar AIO Python module fiascos on Docker Hub to reference... though, of course, it's best to hand-pick templates from credible sources (so many badly made, half-broken registries out there, man 😨). That said, I did find a few functional Docker registries with parallel Python apps running numerous modules that we could analyze and refine. Then, fine-tune using Deepseek R1, which is insanely powerful for solving roadblocks.
We could set up a runtime folder inside the root container, including a precompiled PyTorch with CUDA, Python for Debian/Arch Linux, and all necessary modules and dependencies. Not only would this be ideal for this setup, but it’d also give you a precompiled version of your project—similar to what Applio does. Blane’s been using this method for a while, and I’ve never run into issues with it. It’s blissful—no more Googling "how to install Python" smfh.
If I’m not mistaken, this entire app doesn’t rely on API calls, right? So really, all we need is a Dockerfile and docker-compose.yaml to define container/host ports, a database (if needed), etc. (I know, I tend to simplify things when they’re actually chaotic to implement 😅).
Let me know what you think! I can help tremendously.
I want to get this running on my unRAID OS 🙌
@R3gm I recommend using fairseq2; it's an alternative to the old fairseq.
Or use a fixed build of fairseq, for example: https://github.com/Bebra777228/TrainVocModel-EN/releases/download/fixed-packages/fairseq_fixed-0.13.0-cp311-cp311-linux_x86_6