Retrieval-based-Voice-Conversion-WebUI

How to run on RTX 50-series GPU

Open quasiblob opened this issue 7 months ago • 40 comments

Hi.

I remember trying RVC last year, but after upgrading my GPU I can no longer use it.

I downloaded the latest zip version, but it doesn't seem to support Nvidia 50-series GPUs; there is a warning when I try to run go-web.bat.

However, I have no idea whether the zip version's torch can be updated - there seem to be conda- and poetry-related things inside the folder, and I barely know anything about those.

I've also tried installing from the repo. Everything goes OK, but I get the warning below in the console when I run go-web.bat. The UI dropdowns don't seem to update like they should, so I can't get this one working either.

Any ideas?

EDIT - See my reply below, I got RVC WebUI mostly working now I think.

(venv) R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI>go-web.bat

(venv) R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI>venv\Scripts\python.exe infer-web.py --pycmd venv\Scripts\python.exe --port 7897
2025-05-10 20:17:23 | INFO | configs.config | Found GPU NVIDIA GeForce RTX 5090
2025-05-10 20:17:23 | INFO | configs.config | Half-precision floating-point: True, device: cuda:0
R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI\venv\lib\site-packages\gradio_client\documentation.py:106: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI\venv\lib\site-packages\gradio_client\documentation.py:106: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
2025-05-10 20:17:24 | INFO | __main__ | Use Language: en_US

quasiblob avatar May 10 '25 17:05 quasiblob

I'm also on a 50-series GPU. You can upgrade both pytorch and xformers as they've just released stable builds for cuda 12.8, but RVC seems to rely on torch-directml, which hasn't been updated to support pytorch 2.7, and there may be something else that I'm missing, too.

I upgraded xformers and torch by doing this inside the RVC directory:

runtime\python.exe -m pip uninstall torch torchvision torchaudio xformers
runtime\python.exe -m pip install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu128

In the meantime, you can run on CPU (if you're not using the real-time GUI) by opening configs\config.py and changing if torch.cuda.is_available(): to if False:. I was still getting errors with my updated packages, though, so I had to switch back to torch 2.0.0+cu118, torch-directml 0.2.0.dev230426, and xformers 0.0.19.
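A hedged, reversible variant of the configs\config.py edit above (the names RVC_FORCE_CPU and use_cuda are mine, not from the RVC source) gates the device choice on an environment variable instead of hard-coding if False::

```python
import os

# Illustrative helper, not RVC code: decide whether to use CUDA, letting an
# environment variable force the CPU path without permanently editing config.py.
def use_cuda(cuda_available: bool) -> bool:
    if os.environ.get("RVC_FORCE_CPU") == "1":
        return False  # behaves like the `if False:` edit
    return cuda_available
```

Setting RVC_FORCE_CPU=1 before launching would then have the same effect as the `if False:` edit, and unsetting it restores GPU use.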

If someone else out there knows how to get it running, please share.

plxl avatar May 11 '25 09:05 plxl

I'm also on a 50-series GPU. You can upgrade both pytorch and xformers as they've just released stable builds for cuda 12.8 [...]

I'm facing the same thing. I was able to train a new model, but only on CPU. I'm also facing another issue: while converting, I get a "timed out" error, which is very annoying.

stazzz-ai avatar May 25 '25 17:05 stazzz-ai

You may try to run go-realtime-gui.bat, and if you get something like:

AttributeError: 'RVC' object has no attribute 'tgt_sr'

you can patch runtime/Lib/site-packages/fairseq/checkpoint_utils.py:

search for every torch.load call and add weights_only=False to bypass the stricter load policy in newer versions of PyTorch.
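As a sketch of an alternative to editing the package file by hand, the same default can be applied once at startup with a small wrapper (force_weights_only_false is my name; this assumes the wrapped callable accepts a weights_only keyword, as torch.load does in recent PyTorch):

```python
import functools

def force_weights_only_false(load_fn):
    """Wrap a torch.load-style callable so weights_only defaults to False."""
    @functools.wraps(load_fn)
    def wrapper(*args, **kwargs):
        kwargs.setdefault("weights_only", False)  # callers can still override
        return load_fn(*args, **kwargs)
    return wrapper

# At startup, before fairseq loads any checkpoint:
#   import torch
#   torch.load = force_weights_only_false(torch.load)
```

This keeps the change in your own launcher script instead of inside site-packages, so it survives reinstalling fairseq.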

haofanurusai avatar May 31 '25 02:05 haofanurusai

BTW, I am using an RTX 4080, so I am not sure about the RTX 50 series.

I just updated my PyTorch to 2.7.0 for better speed; then the problem occurred, and after patching, it worked.

Hope this helps.

haofanurusai avatar May 31 '25 02:05 haofanurusai

@haofanurusai

Thanks, I found this and it partially fixes things. However, training wasn't working; I had to make some fixes, after which training starts and finishes, although the UI shows 'Error' in the bottom-right corner.

I got these features working (not thoroughly tested, just 5 minutes of testing each):

  • Inference
  • Training
  • Merging models (everything else too on this page seems to be working)
  • Onnx export ~~not working~~ (edit - works, but not with simplify)

I don't know much about this onnx export - I've never used it and don't know how it should work. It does something for a while, the progress bar fills, and then I get "Something went wrong: connection errored out". I found two threads about this in this repo, but no solution, unless I missed it. Edit - I found the culprit for this export error; it is this line in export.py:

model, _ = onnxsim.simplify(ExportedPath)

I bypassed it, but then you have to write some code around it, and I also had to change the def export_onnx(ModelPath, ExportedPath) function in infer-web.py so that it returns something other than None; otherwise the Gradio UI seems to get stuck.

Not sure what the proper fix would be, but at least the model exports now without errors, although the simplify operation is skipped.

quasiblob avatar May 31 '25 16:05 quasiblob

Seems like I managed to get things working.

This took several hours, so if someone else is interested - please try this out and let me know how it went.

I'm not a Python expert, and I haven't used RVC much either, so let me know if the errors are fixable in some other way, without modifying package files (for example).


About

  • These steps are only for RTX 50 series GPUs
  • ❗This setup uses venv virtual environment
  • ❗Python version should be 3.10.x
  • Steps should work in listed order
  • I didn't test things too thoroughly in the web UI
    • But at least I did this install setup 3 times (yet some steps may still be missing)

  • ✅ Things that seem to be working ✅:
    • Inference tab features
    • Training tab features
    • Checkpoint tab features
    • Onnx export (without simplify)

📜 RVC Webui install steps for RTX 50 series GPUs

  • Clone the repo into folder "RVC_webUI":

    git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI RVC_webUI

  • Go into created folder:

    cd RVC_webUI

  • Create venv virtual environment with Python 3.10 (note, I used venv as name):

    py -3.10 -m venv venv

  • Activate venv:

    call ./venv/Scripts/activate.bat

  • Update pip to version 24.0

    • I think I got errors otherwise:
      • ERROR: No matching distribution found for typing-extensions>=4.10.0

    python.exe -m pip install pip==24.0

  • Install requirements:

    pip install -r requirements.txt

  • Install Pytorch in venv:

    • SM 12.0 architecture requires Pytorch 2.7 or newer...

    • Install the latest version (for me, versions were: torch 2.7.0+cu128, torchaudio 2.7.0+cu128):

    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu128 --force-reinstall
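After the reinstall, a quick sanity check confirms the cu128 build actually sees the GPU; this sketch is safe to run even in an environment without torch, and on an RTX 50-series card the compute capability should report as (12, 0):

```python
# Sanity check after installing the cu128 wheels: print the torch version
# and, if CUDA is visible, the device name and compute capability.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    print(torch.__version__)
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))
        print(torch.cuda.get_device_capability(0))
```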

  • Install missing requirements (needed for onnx export):

    pip install onnxsim

    pip install matplotlib==3.10.3 (not sure if this one is required)

  • Check you have onnx packages installed

    pip show onnx onnxruntime onnxruntime-gpu

    • my versions happened to be:

      • onnx (1.18.0)
      • onnxruntime (1.22.0)
      • onnxruntime-gpu (1.22.0)
    • Note you should probably have one onnx runtime, either cpu/gpu but not both

  • Verify you still have CUDA Pytorch versions installed at this point:

    pip show torch torchaudio

  • Verify you didn't end up with conflicts

    pip check

  • Store your current setup (if you want)

    pip freeze > install_packages.txt

  • Check pip's version

    pip --version

    If it is not 24.0, install it:

    python.exe -m pip install pip==24.0

  • Check that ffmpeg is in your system PATH (running it should print something)

    ffmpeg

    • If you don't have it, install it now and add it to system PATH
  • Download models required to run the app automatically (repo has a script for this):

    python tools/download_models.py

  • Modify go-web.bat:

    notepad go-web.bat

    • From:

    runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897

    • To:

    venv\scripts\python.exe infer-web.py --pycmd venv\scripts\python.exe --port 7897


  • At this point, you could run the app
    • But it will give errors, and will fail on some tasks
    • Some of these are caused by Pytorch 2.7+cu128 which we need for 50 series GPUs

⚠️First error - while inferencing:

  • Related to Pytorch 2.7+ and weights_only causing pickle.UnpicklingError

How to fix:

  • Open checkpoint_utils.py file:

notepad .\venv\Lib\site-packages\fairseq\checkpoint_utils.py

  • Go to line 315 and add the weights_only parameter:

state = torch.load(f, map_location=torch.device("cpu"), weights_only=False)


⚠️Second error - while training:

  • \multiprocessing\process.py, AttributeError: 'FigureCanvasAgg' object has no attribute 'tostring_rgb'

How to fix:

  • Open utils.py file:

notepad .\infer\lib\train\utils.py

  • Go to line 238
    • Comment out the 2 lines starting with: "data = "
    • Replace them with:

data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape(fig.canvas.get_width_height()[::-1] + (3,))

  • Do the same on "plot_alignment_to_numpy" function below:

data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape(fig.canvas.get_width_height()[::-1] + (3,))

  • save and close the file

  • Then change the matplotlib version:

pip install matplotlib==3.9.0
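If you'd rather not pin matplotlib to 3.9.0, a hedged alternative (fig_to_rgb_array is my name, not from the RVC source) is to build the array from buffer_rgba(), which exists on matplotlib versions where tostring_rgb() was removed, and drop the alpha channel to keep the (H, W, 3) shape the training code expects:

```python
import numpy as np

def fig_to_rgb_array(fig):
    """Return an (H, W, 3) uint8 RGB array from a drawn matplotlib figure.

    Uses canvas.buffer_rgba(), available on current matplotlib releases,
    instead of the removed tostring_rgb().
    """
    fig.canvas.draw()
    w, h = fig.canvas.get_width_height()
    rgba = np.frombuffer(fig.canvas.buffer_rgba(), dtype=np.uint8)
    return rgba.reshape(h, w, 4)[..., :3]
```

The two "data =" lines in plot_spectrogram_to_numpy and plot_alignment_to_numpy could then call this helper instead, independent of the installed matplotlib version.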


⚠️Training should now work...

  • But the UI will show "Error" even though training keeps running in the terminal

⚠️Third error - Onnx export PermissionError

  • Modify infer-web.py

notepad infer-web.py

  • Find export_onnx function and change it to this:

      def export_onnx(ModelPath, ExportedPath):
          from infer.modules.onnx.export import export_onnx as eo
          result = eo(ModelPath, ExportedPath) 
          return result
    
  • Modify export.py

    notepad .\infer\modules\onnx\export.py

  • Modify this line:

    cpt = torch.load(ModelPath, map_location="cpu")

  • And change it to this:

    cpt = torch.load(ModelPath, map_location="cpu", weights_only=False)

  • Modify the last lines of export.py, they should be like this in the original file:

      model, _ = onnxsim.simplify(ExportedPath)
      onnx.save(model, ExportedPath)
      return "Finished"
    
  • Change these lines into this:

      try:
          # model, _ = onnxsim.simplify(ExportedPath)
          # onnx.save(model, ExportedPath)
          print(f"export_onnx: onnxsim/onnx.save skipped/completed (if uncommented).")
      except NameError:
          print(f"export_onnx: onnxsim.simplify was skipped.")
      except Exception as e:
          print(f"export_onnx: An unexpected error occurred in try block: {e}")
    
      print(f"export_onnx: About to return 'Finished'.")
      return "Finished"
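A hedged, cleaner variant of the same workaround (finish_export is my name for the tail of the function) keeps the simplify step best-effort instead of commented out, so it still runs when onnxsim works and the export still returns "Finished" when it doesn't:

```python
def finish_export(ExportedPath):
    """Best-effort onnx simplify; the export succeeds either way."""
    try:
        import onnx
        import onnxsim
        model, ok = onnxsim.simplify(ExportedPath)
        if ok:
            onnx.save(model, ExportedPath)
            print("export_onnx: simplified model saved")
    except Exception as exc:
        print(f"export_onnx: skipping onnxsim.simplify ({exc})")
    return "Finished"
```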
    

🎉 Done

  • Copy some voices to test things:

    • Check Hugging Face: https://huggingface.co/models?sort=trending&search=RVC
  • Place the files in the following folders:

    • Pth files folder (each .pth file can be in weights root folder):

    \assets\weights

    • Index files folder (folder with voice name for each index):

    \log\

Run the app

go-web.bat

quasiblob avatar Jun 01 '25 15:06 quasiblob

Guys, the author has packed this project for 50-series GPUs; he wrote this in the release notes:

Chinese users can use the following 2 mirrors to speed up downloads. 1. No login needed, free full-speed download link: https://www.123pan.com/s/5tIqVv-QHNcv.html

If you open this link, you will find a txt file named RVC官方整合包直链免费满速下载地址.txt (that is, "RVC official package direct-link free full-speed download address").

Open it, and inside you will find: RVC20240604Nvidia.7z https://www.modelscope.cn/models/FlowerCry/rvc-windows-packages/resolve/master/RVC20240604Nvidia.7z

It works for me.

minwang1 avatar Jun 06 '25 17:06 minwang1

Sorry, but no - speaking only for myself, of course. Zips from random addresses... these ML things with their zillion libraries give me headaches already, let alone random zips from who knows where.

quasiblob avatar Jun 06 '25 22:06 quasiblob

Sorry but no, of course speaking only on my behalf. [...]

Like I said, it was uploaded by the author for Chinese users, because they can't access Hugging Face directly. If you're worried about his intentions, OK, go check everything.

minwang1 avatar Jun 07 '25 01:06 minwang1

I'm not worried about something being created by Chinese folks, lol. The only reason is that it is not available on GitHub or a similar site I'm familiar with, and zips in general are not the way to go when sharing this kind of software - but that is just my opinion.

quasiblob avatar Jun 07 '25 08:06 quasiblob

It is fair to be skeptical, given that the authors of this repository appear to be active and have actually released GPT-SoVITS on Hugging Face with supposed 50-series support. There is no reason not to release this updated RVC WebUI on HF, too.

Despite that, I compared the download linked by @minwang1 with the current Windows build available, and there are a lot of differences, as it has been updated in a variety of ways. That said, the packages and versions included are identical, so out of the box it does not work on 50-series hardware.

However, after following the previous steps of upgrading the packages in this new folder (first upgrade pip), as well as updating that one line in Lib\site-packages\fairseq\checkpoint_utils.py to add weights_only=False, as per @quasiblob's steps, I have both the Web UI and the real-time GUI working. I haven't tried training, just basic audio conversion.

So I think we have a success?

plxl avatar Jun 07 '25 14:06 plxl

I made a mistake - this is the link for the 50 series:

https://www.modelscope.cn/models/FlowerCry/rvc-windows-packages/resolve/master/RVC20240604Nvidia50x0.7z

and this website is a Chinese equivalent of HF: https://www.modelscope.cn/models/FlowerCry/rvc-windows-packages/files

minwang1 avatar Jun 07 '25 14:06 minwang1

Ah, I see. Likely because of some security or privacy setting, ModelScope doesn't load for me in Firefox or even Chrome. It only seems to work in Safari - and even then I only get around 800KB/s - so not ideal. The authors should definitely upload this to HF and update the GitHub repository.

plxl avatar Jun 08 '25 03:06 plxl

Can we get the package on Hugging Face or GitHub? I will try the steps laid out!

dirtmonster1337 avatar Jun 18 '25 12:06 dirtmonster1337

Seems like I managed to get things working. [...]

Oh my. I actually got this to work with both the web UI and the realtime GUI. I didn't clone anything, though; I just used the latest NVIDIA release package and did the venv setup in the same folder. Any missing libs not covered in your instructions, I just copied and pasted into the venv.

I'm going to have a lot of fun with this on my 5090.

SwitchChan-Commando avatar Jun 27 '25 10:06 SwitchChan-Commando

Seems like I managed to get things working. [...]

Oh my. I actually got this to work with both the web UI and realtime GUI. [...]

I'm trying this, but when I start Step 2 in model training, I get the error "System cannot find path" - I think it relates to the audio data path.

1castralis avatar Jul 13 '25 12:07 1castralis

Seems like I managed to get things working.

This took several hours, so if someone else is interested - please try this out and let me know how it went.

I'm not a Python expert, and I haven't use RVC that much either, so let me know if the errors are fixable in some other way, without modifying package files (for example).

About

  • These steps are only for RTX 50 series GPUs

  • ❗This setup uses venv virtual environment

  • ❗Python version should be 3.10.x

  • Steps should work in listed order

  • I didn't test things too throughly in web UI

    • But at least I did this install setup 3 times (yet some steps may still be missing)
  • ✅ Things that seem to be working ✅:

    • Inference tab features
    • Training tab features
    • Checkpoint tab features
    • Onnx export (without simplify)

📜 RVC Webui install steps for RTX 50 series GPUs

  • Clone the repo into folder "RVC_webUI": git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI RVC_webUI

  • Go into created folder: cd RVC_webUI

  • Create venv virtual environment with Python 3.10 (note, I used venv as name): py -3.10 -m venv venv

  • Activate venv: call ./venv/Scripts/activate.bat

  • Update pip to version 24.0

    • I got errors otherwise I think

      • ERROR: No matching distribution found for typing-extensions>=4.10.0

    python.exe -m pip install pip==24.0

  • Install requirements: pip install -r requirements.txt

  • Install Pytorch in venv:

    • SM 12.0 architecture requires Pytorch 2.7 or newer...
    • Install the latest version (for me, versions were: torch 2.7.0+cu128, torchaudio 2.7.0+cu128):

    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu128 --force-reinstall

  • Install missing requirements (needed for onnx export): pip install onnxsim pip install matplotlib==3.10.3 (not sure if this one is required)

  • Check you have onnx packages installed pip show onnx onnxruntime onnxruntime-gpu

    • my versions happened to be:

      • onnx (1.18.0)
      • onnxruntime (1.22.0)
      • onnxruntime-gpu (1.22.0)
    • Note you should probably have one onnx runtime, either cpu/gpu but not both

  • Verify you still have CUDA Pytorch versions installed at this point: pip show torch torchaudio

  • Verify you didn't end up with conflicts pip check

  • Store your current setup (if you want) pip freeze > install_packages.txt

  • Check pip's version pip --version If it is not not 24.0, install it: python.exe -m pip install pip==24.0

  • Check you ffmpeg in system path (it should print something) ffmpeg

    • If you don't have it, install it now and add it to system PATH
  • Download models required to run the app automatically (repo has a script for this): python tools/download_models.py

  • Modify go-web.bat: notepad go-web.bat

    • From:

    runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897

    • To:

    venv\scripts\python.exe infer-web.py --pycmd venv\scripts\python.exe --port 7897

  • At this point, you could run the app

    • But it will give errors, and will fail on some tasks
    • Some of these are caused by Pytorch 2.7+cu128 which we need for 50 series GPUs

⚠️First error - while inferencing:

  • Related to Pytorch 2.7+ and weights_only causing pickle.UnpicklingError

How to fix:

  • Open checkpoint_utils.py file:

notepad .\venv\Lib\site-packages\fairseq\checkpoint_utils.py

  • Go to line 315 and add the weights only parameter:

state = torch.load(f, map_location=torch.device("cpu"), weights_only=False)

⚠️Second error - while training:

  • \multiprocessing\process.py, AttributeError: 'FigureCanvasAgg' object has no attribute 'tostring_rgb'

How to fix:

  • Open utils.py file:

notepad .\infer\lib\train\utils.py

  • Go to line 238

    • Comment out the 2 lines starting with: "data = "
    • Replace them with:

data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape(fig.canvas.get_width_height()[::-1] + (3,))

  • Do the same in the plot_alignment_to_numpy function below:

data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape(fig.canvas.get_width_height()[::-1] + (3,))

  • Save and close the file
  • Then change the matplotlib version:

pip install matplotlib==3.9.0
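Alternatively, if you would rather keep a newer matplotlib than pin 3.9.0, a hedged sketch (not the repo's code, assuming the headless Agg backend the training code uses) of buffer_rgba(), which replaced the removed tostring_rgb():

```python
# Sketch: render a figure to an RGB numpy array without tostring_rgb(),
# which was removed in newer matplotlib releases.
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0])
fig.canvas.draw()

# buffer_rgba() yields an H x W x 4 buffer; drop the alpha channel for RGB.
data = np.asarray(fig.canvas.buffer_rgba())[..., :3]
plt.close(fig)
print(data.shape)  # (height, width, 3)
```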

⚠️Training should now work...

    • But the UI will show "Error" while training is actually running in the terminal

⚠️Third error - Onnx export PermissionError

  • Modify infer-web.py

notepad infer-web.py

  • Find the export_onnx function and change it to this:
      def export_onnx(ModelPath, ExportedPath):
          from infer.modules.onnx.export import export_onnx as eo
          result = eo(ModelPath, ExportedPath) 
          return result
    
  • Modify export.py: notepad .\infer\modules\onnx
  • Modify this line: cpt = torch.load(ModelPath, map_location="cpu")
  • And change it to this: cpt = torch.load(ModelPath, map_location="cpu", weights_only=False)
  • Modify the last lines of export.py; they should look like this in the original file:
      model, _ = onnxsim.simplify(ExportedPath)
      onnx.save(model, ExportedPath)
      return "Finished"
    
  • Change these lines to this (the onnxsim simplification is skipped, since it fails on this setup):
      # model, _ = onnxsim.simplify(ExportedPath)
      # onnx.save(model, ExportedPath)
      print("export_onnx: onnxsim simplification skipped.")
      return "Finished"
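A slightly tidier variant of the same workaround, as a hypothetical sketch (the function name maybe_simplify is mine, not the repo's): attempt the simplification only when onnxsim is importable, and never let a failure abort the export.

```python
# Sketch: try onnxsim simplification, but degrade gracefully if it is
# unavailable or fails, instead of crashing the whole export.
def maybe_simplify(path):
    try:
        import onnxsim, onnx  # optional dependency
    except ImportError:
        print(f"onnxsim not installed; keeping {path} unsimplified")
        return "Finished"
    try:
        model, ok = onnxsim.simplify(path)
        if ok:
            onnx.save(model, path)
    except Exception as e:
        print(f"simplify failed ({e}); keeping the original model")
    return "Finished"
```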
    

🎉 Done

  • Copy some voices to test things:

    • Check Hugging Face: https://huggingface.co/models?sort=trending&search=RVC
  • Place the files in the following folders:

    • Pth files folder (each .pth file can go directly in the weights root folder):

    \assets\weights

    • Index files folder (a subfolder named after each voice, one folder per index):

    \logs\

Run the app

go-web.bat

In the new version that you linked to, the rmvpe model sometimes doesn't work. Will it work if I get an RTX 5090 instead of my 4080?

2025-07-28 20:19:24 | INFO | infer.modules.vc.pipeline | Loading rmvpe model,assets/rmvpe/rmvpe.pt
2025-07-28 20:19:25 | WARNING | infer.modules.vc.modules | Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\modules.py", line 188, in vc_single
audio_opt = self.pipeline.pipeline(
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\pipeline.py", line 354, in pipeline
pitch, pitchf = self.get_f0(
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\pipeline.py", line 154, in get_f0
f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
File "C:\RVC20240604Nvidia50x0\infer\lib\rmvpe.py", line 605, in infer_from_audio
hidden = self.mel2hidden(mel)
File "C:\RVC20240604Nvidia50x0\infer\lib\rmvpe.py", line 584, in mel2hidden
hidden = self.model(mel)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\infer\lib\rmvpe.py", line 410, in forward
x = self.fc(x)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\container.py", line 240, in forward
input = module(input)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\infer\lib\rmvpe.py", line 174, in forward
return self.gru(x)[0]
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\nn\modules\rnn.py", line 1393, in forward
result = _VF.gru(
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
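The error above usually means a non-contiguous tensor reached cuDNN's GRU kernel; in PyTorch the usual remedy is calling .contiguous() on the input before the recurrent layer (e.g. on the mel tensor in rmvpe.py — hypothetical, untested against this codebase). A NumPy-only illustration of what "non-contiguous" means:

```python
# Sketch: a transposed view shares memory with the original array but is no
# longer C-contiguous; some kernels require a compact, contiguous layout.
import numpy as np

a = np.arange(12, dtype=np.float32).reshape(3, 4)
view = a.T                          # transposed view: same memory, strided access
print(view.flags["C_CONTIGUOUS"])   # False

fixed = np.ascontiguousarray(view)  # makes a compact, contiguous copy
print(fixed.flags["C_CONTIGUOUS"])  # True
```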

Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 1007, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 953, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\components.py", line 2076, in postprocess
processing_utils.audio_to_file(sample_rate, data, file.name)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 206, in audio_to_file
data = convert_to_16_bit_wav(data)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 219, in convert_to_16_bit_wav
if data.dtype in [np.float64, np.float32, np.float16]:
AttributeError: 'NoneType' object has no attribute 'dtype'

Hayri-maker avatar Jul 28 '25 18:07 Hayri-maker

Sorry for my name but hayri was taken

Hayri-maker avatar Jul 28 '25 18:07 Hayri-maker

Why can't you just copy your torch folder into RVC? Can the old version find torch if you installed it via pip and it's in C:\Users\Hayri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0\LocalCache\local-packages\Python313\site-packages?

Hayri-maker avatar Jul 28 '25 18:07 Hayri-maker

What should I do? RMVPE is not working in RVC on the RTX 5090.

Hayri-maker avatar Jul 29 '25 22:07 Hayri-maker

What should I do? RMVPE is not working in RVC on the RTX 5090, I mean with torch cu128.

Hayri-maker avatar Aug 02 '25 11:08 Hayri-maker

What does this mean? I am using torch 2.7.1; the old torch cu118 works.

2025-07-28 16:20:00 | WARNING | infer.modules.vc.modules | (same cuDNN CUDNN_STATUS_NOT_SUPPORTED traceback and gradio 'NoneType' object has no attribute 'dtype' traceback as in my earlier comment)

Hayri-maker avatar Aug 12 '25 21:08 Hayri-maker

update update update?

dirtmonster1337 avatar Aug 15 '25 14:08 dirtmonster1337

When I use the latest PyTorch in my RVC, I get the following with RMVPE:

Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 1006, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 847, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\modules.py", line 100, in get_vc
person = f'{os.getenv("weight_root")}/{sid}'
NameError: name 'os' is not defined
2025-08-28 16:41:55 | WARNING | infer.modules.vc.modules | Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\modules.py", line 172, in vc_single
self.hubert_model = load_hubert(self.config)
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\utils.py", line 21, in load_hubert
models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
NameError: name 'checkpoint_utils' is not defined

Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 1007, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 953, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\components.py", line 2076, in postprocess
processing_utils.audio_to_file(sample_rate, data, file.name)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 206, in audio_to_file
data = convert_to_16_bit_wav(data)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 219, in convert_to_16_bit_wav
if data.dtype in [np.float64, np.float32, np.float16]:
AttributeError: 'NoneType' object has no attribute 'dtype'
2025-08-28 16:43:15 | WARNING | infer.modules.vc.modules | Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\modules.py", line 172, in vc_single
self.hubert_model = load_hubert(self.config)
File "C:\RVC20240604Nvidia50x0\infer\modules\vc\utils.py", line 21, in load_hubert
models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
NameError: name 'checkpoint_utils' is not defined

Traceback (most recent call last):
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 1007, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\blocks.py", line 953, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\components.py", line 2076, in postprocess
processing_utils.audio_to_file(sample_rate, data, file.name)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 206, in audio_to_file
data = convert_to_16_bit_wav(data)
File "C:\RVC20240604Nvidia50x0\runtime\lib\site-packages\gradio\processing_utils.py", line 219, in convert_to_16_bit_wav
if data.dtype in [np.float64, np.float32, np.float16]:
AttributeError: 'NoneType' object has no attribute 'dtype'

Hayri-maker avatar Aug 28 '25 14:08 Hayri-maker

It works in Applio. How can I upgrade gradio?

Hayri-maker avatar Aug 28 '25 14:08 Hayri-maker

I will code it myself, can't believe it isn't updated!!!

dirtmonster1337 avatar Sep 02 '25 23:09 dirtmonster1337

I'm unsure if any of you have tried the newer version release with 50 series support on ModelScope, linked by @minwang1.

I understand it is a bit concerning to be downloading from there but I personally didn't find it to be doing anything out of the ordinary. And you should be careful with everything you download, anyway, even from here.

Run your own tests and determine if it is safe for you: https://www.modelscope.cn/models/FlowerCry/rvc-windows-packages/resolve/master/RVC20240604Nvidia50x0.7z

plxl avatar Sep 03 '25 07:09 plxl

I'm unsure if any of you have tried the newer version release with 50 series support on ModelScope, linked by @minwang1.

I understand it is a bit concerning to be downloading from there but I personally didn't find it to be doing anything out of the ordinary. And you should be careful with everything you download, anyway, even from here.

Run your own tests and determine if it is safe for you: https://www.modelscope.cn/models/FlowerCry/rvc-windows-packages/resolve/master/RVC20240604Nvidia50x0.7z

Could you try this file with the RTX 5090 version? Do you have an RTX 5090? https://www.dropbox.com/scl/fi/4m8q9xlkapvqurynr3zev/Nathuset-0004.mp3?rlkey=ks128zchg66r2nb7v9hcj1o73&dl=1

Hayri-maker avatar Sep 16 '25 15:09 Hayri-maker

How can I upgrade gradio?

Hayri-maker avatar Sep 16 '25 15:09 Hayri-maker

How can I upgrade gradio in RVC?

2025-09-16 17:52:25 | INFO | httpx | HTTP Request: GET https://api.gradio.app/gradio-messaging/en "HTTP/1.1 200 OK"
Traceback (most recent call last):
File "C:\a\infer-web.py", line 1515, in <module>
app.queue(concurrency_count=511, max_size=1022).launch(
File "C:\a\runtime\lib\site-packages\gradio\blocks.py", line 2133, in queue
raise DeprecationWarning(
DeprecationWarning: concurrency_count has been deprecated. Set the concurrency_limit directly on event listeners e.g. btn.click(fn, ..., concurrency_limit=10) or gr.Interface(concurrency_limit=10). If necessary, the total number of workers can be configured via max_threads in launch().
2025-09-16 17:52:25 | INFO | httpx | HTTP Request: GET https://checkip.amazonaws.com/ "HTTP/1.1 200 "
2025-09-16 17:52:25 | INFO | httpx | HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"

C:\a>pause
Press any key to continue . . .

Hayri-maker avatar Sep 16 '25 15:09 Hayri-maker