Retrieval-based-Voice-Conversion-WebUI
AMD Support - Segmentation Fault
System:
Ubuntu 22.04 LTS
Intel i5-12600k
32GB DDR4
AMD Radeon RX 6650 XT
Manually installed PyTorch 2.0.1 for ROCm, then installed the requirements from requirements.txt. The WebUI boots up without problems, but when trying inference or training I get the following message:
python infer-web.py
Use Language: en_US
Running on local URL: http://0.0.0.0:7865
loading weights/zro.pth
gin_channels: 256 self.spk_embed_dim: 109
<All keys matched successfully>
Segmentation fault (core dumped)
Can we expect AMD support in the near future?
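A generic way to narrow down a segfault like this is to run the process under gdb and grab a native backtrace; a sketch assuming gdb is installed (nothing here is RVC-specific):
gdb --args python3 infer-web.py
# inside gdb: type "run", reproduce the crash, then "bt" to print the backtrace,
# which shows whether the fault is inside the ROCm/HIP libraries or elsewhere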
AMD GPUs are not supported right now. Maybe we can try to add DML support later.
This is probably related to the export variable, "export HSA_OVERRIDE_GFX_VERSION=10.3.0" not being set because Navi 23 is not officially supported at the moment.
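For anyone trying this, a minimal sketch of applying that override for a single session (10.3.0 targets the gfx1030 ISA, which RDNA2 cards such as Navi 23 can run):
export HSA_OVERRIDE_GFX_VERSION=10.3.0
python infer-web.py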
AMD GPUs are not supported right now. Maybe we can try to add DML support later.
That would be fantastic if you could!
I really wanted to experiment with it locally, and I did try, but at most I can only use Model Inference and do the first two steps of training. Alternatively, is there a way to train it on the CPU, like with the other steps, if you aren't able to provide support for AMD?
Apologies for displaying my ignorance, I'm not a coder and I only dabbled with it 3 days ago, after seeing some impressive examples of what it was able to do.
Regardless, I wanted to say you're all doing an incredible job; keep up the amazing work! :-)
Have you tried installing PyTorch 2.1.0+rocm5.5?
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.5
To make sure I got this right, I uninstalled and reinstalled everything in the virtual env.
pip freeze | xargs pip uninstall -y
pip install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.5
# Workaround for bug #1109
cat requirements-dml.txt | xargs -I _ pip install "_"
python infer-web.py
I got this output:
/home/nato/.asdf/installs/python/3.10.13/lib/python3.10/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
2023-08-30 16:43:28 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-08-30 16:43:28 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
No supported Nvidia GPU found
use cpu instead
Use Language: en_US
Running on local URL: http://0.0.0.0:7865
OS: Pop!_OS 22.04 LTS x86_64
Host: MS-7D53 1.0
Kernel: 6.4.6-76060406-generic
CPU: AMD Ryzen 5 5600X (12) @ 3.700GHz
GPU: AMD ATI Radeon RX 6700 XT
Memory: 32007 MiB
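A quick way to check whether the ROCm build of PyTorch actually sees the GPU (rather than silently falling back to CPU, as the log above suggests) is a one-liner like this; it's plain PyTorch, not RVC-specific:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.hip)"
# on a working ROCm install this prints True and a HIP version;
# False means PyTorch itself can't see the card, independent of RVC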
So does RVC not support AMD? :( I was wondering why it wasn't exporting anymore. Shame :(
You can use AMD cards on the Windows version if you install the DirectML (DML) build.
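For context, DirectML support in PyTorch on Windows generally comes through the separate torch-directml package; a minimal sketch of checking it (this describes the general DML setup, not necessarily RVC's exact wiring — RVC ships its own requirements-dml.txt, referenced earlier in this thread):
pip install torch-directml
python -c "import torch_directml; print(torch_directml.device())"  # prints the DML device if available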
That's not very useful. We want AMD support, not Windows vendor-locking...
@NatoBoram Have you tried PyTorch 2.1.0+rocm5.6?
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6
To make sure I got this right, I uninstalled and reinstalled everything in the virtual env.
pip freeze | xargs pip uninstall -y
# Notice the version number change
pip install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6
# Workaround for bug #1109
cat requirements-dml.txt | xargs -I _ pip install "_"
python infer-web.py
I got this output:
/home/nato/.asdf/installs/python/3.10.13/lib/python3.10/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
2023-09-07 12:11:48 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-09-07 12:11:48 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
2023-09-07 12:11:49 | INFO | configs.config | No supported Nvidia GPU found
2023-09-07 12:11:49 | INFO | configs.config | Use cpu instead
2023-09-07 12:11:49 | INFO | __main__ | Use Language: en_US
Running on local URL: http://0.0.0.0:7865
OS: Pop!_OS 22.04 LTS x86_64
Host: MS-7D53 1.0
Kernel: 6.4.6-76060406-generic
CPU: AMD Ryzen 5 5600X (12) @ 3.700GHz
GPU: AMD ATI Radeon RX 6700 XT
Memory: 32007 MiB
Running this on main, commit 569fcd8.
Don't mind the message; it's only cosmetic. Internally it just attempts to match against known NVIDIA generations and prints this when it finds nothing it recognizes.
Reference here: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/72a18e66b6317e6c67fde52f00452e9abc271b88/infer-web.py#L72
It should work if you give it index 0, with 0 being your only available CUDA device.
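To see what PyTorch exposes at each index, a plain enumeration helps (on ROCm builds, AMD cards appear through the torch.cuda namespace):
python3 -c "import torch; print([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])"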
Can ROCm users tell me if training works for them? It seemed to work initially, until I somehow managed to trigger a kernel panic. Now training is awfully slow.
I made a pull request with some instructions on how to run RVC with ROCm.
Training is running with ~40 sec per epoch on a RX6700XT (12GB) with a batch size of 16.
Training was actually working very well for me. Then my PC froze and had to be forcibly shut off, and now training is very slow. Additionally, my graphics card appears to be using less power but is reporting full utilization. Weird.
(Before and after screenshots.)
Also encountering a segfault on Ubuntu 22.04 while doing any operation with PTH files. ONNX files work just fine, but conversion doesn't, so there are no models available for use until this is fixed.
@Ecstatify Please keep me updated on this, I am experiencing the exact same issue. I already replaced the entire python runtime, but had no luck with it. Last step is going full nuclear and doing it on a fresh linux install.
During slow training, can you also see one Python multiprocessing thread being maxed out?
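For reference, a simple way to watch for a single maxed-out thread is top in threads mode (standard tooling; the pgrep pattern is a guess at the process name):
top -H -p "$(pgrep -f infer-web.py | head -n 1)"
# -H shows individual threads; one thread pinned near 100% suggests a CPU bottleneck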
So, after much trial and error, I got this working. I will list the instructions below on how I got training working on an AMD RX 6750 XT on Ubuntu Desktop 22.04.3 LTS (per lsb_release -a).
Some notes:
I downloaded the code directly, not a release version. It should work on a release version too, but I downloaded the code and ran through the build instructions, etc.
A big thanks to Orion_light on Reddit, who wrote the original base instructions that I used and tweaked for RVC.
ALSO, I haven't tested rmvpe or rmvpe_gpu, as I forgot to get the pretrains for them, but they should work. Side note: I believe rmvpe was having issues with audio longer than 3 minutes, at least for me.
Install Notes:
- Download and install Ubuntu Desktop 22.04.3 LTS from the official Ubuntu website.
- Once installed, open a terminal window.
- Run these commands separately to add yourself to both the render and video groups:
  sudo usermod -a -G render YourUsernameHere
  sudo usermod -a -G video YourUsernameHere
- Install Python 3 with this command (it may already be preinstalled):
  sudo apt-get install python3
- Open .bashrc with nano (nano ~/.bashrc), then at the bottom of that file add:
  alias python=python3
  export HSA_OVERRIDE_GFX_VERSION=10.3.0
  Make sure that the alias and the export are on DIFFERENT lines.
- Reboot (important).
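A quick sanity check after the reboot, to confirm the .bashrc additions took effect (open a fresh terminal first; these are generic shell commands, nothing RVC-specific):
python --version                  # should resolve via the alias and print Python 3.x
echo "$HSA_OVERRIDE_GFX_VERSION"  # should print 10.3.0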
After booting back into Ubuntu, we will install ROCm and PyTorch.
- Go to the PyTorch | Start Locally website and check the ROCm version (currently 5.4.2).
- Go to the How to Install ROCm page (amd.com), select the version that is compatible with PyTorch, and find the command for your installed Ubuntu version. Example that works right now with RVC V2:
  sudo apt-get update
  wget https://repo.radeon.com/amdgpu-install/5.4.2/ubuntu/jammy/amdgpu-install_5.4.50402-1_all.deb
  sudo apt-get install ./amdgpu-install_5.4.50402-1_all.deb
- Then run this command and let it finish:
  sudo amdgpu-install --usecase=rocm --no-dkms
- Reboot (important).
- Open Start Locally | PyTorch again, select Stable, Linux, Pip, Python, and ROCm, and run the command it outputs in your terminal. Example:
  pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
  (You may need to install pip first; Ubuntu will let you know it is missing and that you can get it by running something like sudo apt-get install pip. That pip install command could be wrong, so double-check.)
- Reboot (important).
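Before moving on, it may be worth verifying both the ROCm install and the PyTorch build with a couple of generic checks (standard tools, not part of the guide above):
rocm-smi                                              # should list your GPU
python3 -c "import torch; print(torch.version.hip)"  # prints a HIP version on a ROCm build of PyTorch, None otherwise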
Next we will build RVC V2 from source. It's pretty self-explanatory via the official docs, but I will retype the steps here, as there is some extra stuff for AMD on Linux.
- Download the source code, either by clicking Code then Download ZIP, or with:
  git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git
- Extract the ZIP file (if you downloaded one).
- cd (change directory) into the extracted folder.
- Run this command:
  curl -sSL https://install.python-poetry.org | python3 -
- After that finishes, run this command:
  poetry install
- After that, install the project's AMD requirements via pip with this command:
  pip install -r requirements-amd.txt
- Then, after that is done, run this command:
  sudo apt-get install rocm-hip-sdk rocm-opencl-sdk
- I don't know if this does anything important, but doing these steps made mine work, so run the command:
  export ROCM_PATH=/opt/rocm
  but not the command export HSA_OVERRIDE_GFX_VERSION=10.3.0, as we already added that to our .bashrc file.
- After everything is done, run:
  python3 infer-web.py
  to open the web interface and start using RVC!
Note about the interface: I had to use Harvest instead of rmvpe or rmvpe_gpu, because I forgot to download that model. Also, for GPU indexes I put 0-1-2 just to be safe. And the biggest note: your GPU WON'T show under GPU Information; it will say "Unfortunately, there is no compatible GPU available to support your training." But when you go to train, open a new terminal window and run rocm-smi, and it will show you, from left to right: GPU index (I believe), Temp, AvgPwr, SCLK, MCLK, Fan percentage, Perf, PwrCap, VRAM%, and GPU%. To tell whether your AMD card is being used, check the Temp and the GPU%.
At a batch size of 16, training 300 epochs, I'm using 99% of my GPU as indicated by GPU%, and my temperature is around the low-to-mid 70s Celsius; I also get some coil whine (reference AMD GPU). It takes about 30-40 seconds per epoch.
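For live monitoring during training, something like this in a second terminal works (standard tooling, nothing RVC-specific):
watch -n 1 rocm-smi  # refreshes temperature, power, VRAM% and GPU% every second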
I hope this helped someone trying to set this up and train with their AMD GPU on Linux!
Unfortunately, this didn't solve the problem, as this is basically how I set it up in the first place, minus using the python-is-python3 metapackage instead of an alias, and using apt instead of apt-get, because we are not in 2008 anymore and you may break things elsewhere on your system by not doing these properly.
I've also just realized that these are the same instructions already provided, so they're already established to not work correctly for most.
3. Run these commands separately:
sudo usermod -a -G render YourUsernameHere
sudo usermod -a -G video YourUsernameHere
This is adding yourself to both the render and video groups.
Use $USER instead of YourUsernameHere
Install Python3 with this command (may already be preinstalled)
sudo apt-get install python3.
Use apt instead of apt-get
Open Bashrc with Nano with this command: nano ~/.bashrc, then at the bottom of that file add:
alias python=python3
export HSA_OVERRIDE_GFX_VERSION=10.3.0
Make sure that the alias and the export are on DIFFERENT lines.
In Ubuntu's .bashrc, there's a dedicated space for aliases:
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
This means you can put your aliases in ~/.bash_aliases without polluting your ~/.bashrc.
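For example, a ~/.bash_aliases along these lines keeps the alias out of ~/.bashrc (note that the export is not an alias, so it still belongs in ~/.bashrc or ~/.profile):
# ~/.bash_aliases
alias python=python3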
I wrote this very late at night, so excuse the issues with certain commands.
As stated in the original post, I wrote this in hopes it can help someone who wants a step by step guide as well as hopefully be able to help someone who is stuck.
Side note: it's still training at around 100-185 watts, depending on when you look at rocm-smi. The wattage did drop to around ~60 watts at one point, but a reboot fixed it; I think it was related to me changing the power plan (or whatever it's called in Ubuntu) from Balanced to Performance while training.
Also, I did originally have the UserWarning: Can't initialize NVML error; after nuking my install, reinstalling, and doing the steps I wrote, that error was fixed for me, so maybe I got lucky. My GPU is the RX 6750 XT, and it's an AMD reference model.
That's a completely different error from the one most people are having (the program just dies outright due to a segfault), and the steps above are the same steps everyone else had beforehand, so the question is: why does this work for you and not us?
That, I'm unsure about. I did just finish training a model tonight and have shut down my Ubuntu install multiple times. I'm willing to provide any info I can and whatever you guys might need.
Just let me know!
Try running with AMD_LOG_LEVEL=2 (logging can be toggled from levels 1 to 4, with 4 being the most verbose).
Edit: ParzivalWolfram, are you using the HSA_OVERRIDE_GFX_VERSION env variable? It is required, for reasons I could get into the details of.
Yes. The AMD_LOG_LEVEL variable changed nothing; no additional output is given.
2023-09-26 09:41:09.222377: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-26 09:41:10 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-09-26 09:41:10 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
2023-09-26 09:41:11 | INFO | configs.config | DEBUG: torch.cuda.is_available(): True
2023-09-26 09:41:11 | INFO | configs.config | Found GPU AMD Radeon Graphics
2023-09-26 09:41:11 | INFO | __main__ | Use Language: en_US
Running on local URL: http://0.0.0.0:7865
2023-09-26 09:41:21 | INFO | infer.modules.vc.modules | Get sid: test-model.pth
2023-09-26 09:41:21 | INFO | infer.modules.vc.modules | Loading: assets/weights/test-model.pth
run-gui.sh: line 2: 54700 Segmentation fault (core dumped) AMD_LOG_LEVEL=2 HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 infer-web.py
Can confirm, no additional output. Will post the normal output I get, however:
$ AMD_LOG_LEVEL=2 python3 infer-web.py
2023-09-26 23:35:38.214390: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-09-26 23:35:38.482033: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-26 23:35:40 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-09-26 23:35:40 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
2023-09-26 23:35:41 | INFO | configs.config | Found GPU AMD Radeon RX 6750 XT
2023-09-26 23:35:41 | INFO | __main__ | Use Language: en_US
Running on local URL: http://0.0.0.0:7865
Interesting thing to note: when mine boots up, it detects the exact model of GPU I have; see here: 2023-09-26 23:35:41 | INFO | configs.config | Found GPU AMD Radeon RX 6750 XT, as opposed to what @ParzivalWolfram has, which is 2023-09-26 09:41:11 | INFO | configs.config | Found GPU AMD Radeon Graphics.
I have a 7800 XT, so the strings may not be updated in ROCm yet as it's pretty new. I also forgot that I added some debug output of my own while tracking down a different problem, so if you're wondering what the extra debug line at the top is, that was my doing.
Very interesting. I wonder if everyone having the segfault is on a newer AMD GPU (e.g. the RX 7000 series)?
EDIT: OP has an RX 6000-series card, so it can't be that.
Not all the 6000/7000-series cards are on the same underlying chipset; you'd have to check the chipset on something like TechPowerUp's GPU database. I'd guess that's pretty likely, since per dmesg, it's dying in AMD's HIP libraries in particular for me. I only just noticed the log there.
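One way to check which gfx target ROCm actually reports for a card (and hence whether the HSA_OVERRIDE_GFX_VERSION=10.3.0 override applies) is rocminfo, which ships with ROCm:
rocminfo | grep -i gfx
# e.g. gfx1031 for an RX 6700 XT / 6750 XT; the 10.3.0 override presents it as gfx1030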
I'm gonna take a guess here, but you people might be using "outdated" ROCm installations.
Mind sharing the distribution and the rocm-device-libs package version on your systems?
I'll share my working rocm-device-libs version tomorrow, but I did want to ask: after your kernel panic a few comments above, how did you fix your training speed? Mine seems to fluctuate a lot when training different models, referring to the wattage shown in rocm-smi.
$ cat /etc/os-release && apt list rocm-device-libs && pip3 list | grep torch
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
UBUNTU_CODENAME=jammy
Listing... Done
rocm-device-libs/jammy,now 1.0.0.50600-67~22.04 amd64 [installed,automatic]
pytorch-triton-rocm 2.1.0+34f8189eae
torch 2.2.0.dev20230916+rocm5.6
torchaudio 2.2.0.dev20230916+rocm5.6
torchcrepe 0.0.20
torchvision 0.17.0.dev20230916+rocm5.6