SimSwap
RTX 3000 issues
Card is RTX 3060 Ti. I was getting the "stuck at end" issue so I searched around and did this
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge
and this
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
And still I get this
Traceback (most recent call last):
File "test_video_swapsingle.py", line 58, in
I did run simswap fine on my 1050Ti a few months ago. I had an issue with some 79999 thing but found the answer on a youtube comment I think and it ran fine after that. Thanks everyone.
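As an aside for anyone juggling torch builds like the above: a quick way to confirm which torch actually got installed and whether it can see the GPU is a guarded check like this (the function name is mine, not part of SimSwap):

```python
def cuda_status():
    """Report the installed torch version and whether CUDA is usable.

    Guarded import so the check also runs where torch is absent.
    """
    try:
        import torch
    except ImportError:
        return ("torch not installed", False)
    return (torch.__version__, torch.cuda.is_available())

print(cuda_status())
```

If the second value is False despite a CUDA-enabled card, the torch/cudatoolkit pairing is usually the culprit.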
2080 Ti here, same issue. I got around it (although maybe not correctly) by going into model_zoo.py, line 23. The stack trace tells you where yours is.
File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model session = onnxruntime.InferenceSession(self.onnx_file, None)
Once there, change the line to look like this:
session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
That way you are explicitly telling it which providers to use, since ONNX Runtime 1.9 (the latest version) works that way now. Alternatively (untested), you can install an earlier version of ONNX Runtime that doesn't work that way.
Note: I also had to install TensorRT, following the directions on ONNX Runtime's website, as changing that line alone led to another error. You may be able to get by with just the CUDAExecutionProvider. I get an INT64 weight warning, seen here: https://github.com/neuralchen/SimSwap/issues/173#issue-1074673726
Still works though.
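For anyone applying the same fix, a slightly safer variant is to build the provider list from whatever the local ONNX Runtime build actually offers, keeping CPU as a last resort so session creation never fails outright. This is only a sketch; the helper name and preferred ordering are mine, not part of insightface:

```python
# Preferred order: TensorRT first, then CUDA, then CPU. Adjust to taste.
PREFERRED = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]

def choose_providers(available):
    """Keep the preferred providers that this ORT build actually has, in order.

    Always append CPUExecutionProvider as a fallback so InferenceSession can
    still be created when no GPU provider is usable.
    """
    chosen = [p for p in PREFERRED if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# With a real install you would then do (untested here):
# import onnxruntime
# session = onnxruntime.InferenceSession(
#     self.onnx_file,
#     providers=choose_providers(onnxruntime.get_available_providers()))
```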
Ok, thanks. Modifying that line in "model_zoo.py" made it work, but it is only running on the CPU; GPU usage is at 1%. This is what I get now:
2021-12-11 11:13:35.8181531 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "C:\Users\PC1.conda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2021-12-11 11:13:35.8183016 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
input mean and std: 127.5 127.5
find model: ./insightface_func/models\antelope\glintr100.onnx recognition
2021-12-11 11:13:37.3988484 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "C:\Users\PC1.conda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2021-12-11 11:13:37.3989775 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
find model: ./insightface_func/models\antelope\scrfd_10g_bnkps.onnx detection
set det-size: (640, 640)
(142, 366, 4)
Yeah, I think you ran into my issue.
So I looked at their site and found this https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#install
And basically ended up downloading TensorRT 8.2 GA from Nvidia. https://developer.nvidia.com/nvidia-tensorrt-download
Once you install it, you should be good to go. Basically, without it you can't use TensorRT at all. I think you can optionally exclude the TensorRT provider, but with it I think you see something like 4x the performance.
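Before installing TensorRT, it can be worth checking whether your onnxruntime build even supports the provider. A minimal guarded check (the helper is my own, assuming nothing beyond a possibly-installed onnxruntime):

```python
def available_providers():
    """List execution providers compiled into the local onnxruntime build.

    "TensorrtExecutionProvider" appearing here only means the build supports
    it; the TensorRT DLLs must still be findable on PATH at session creation.
    Returns [] when onnxruntime is not installed (guarded import).
    """
    try:
        import onnxruntime
    except ImportError:
        return []
    return list(onnxruntime.get_available_providers())

print(available_providers())
```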
Sorry for the noob question but how in the world do I install the TensorRT? I downloaded
TensorRT-8.2.1.8.Windows10.x86_64.cuda-11.4.cudnn8.2.zip from https://developer.nvidia.com/nvidia-tensorrt-download
Do I need to extract this to a folder somewhere? Do I do it in Anaconda? In the simswap env?
Was I supposed to download something from here, or is this just instructions?
https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#install
Thanks for all the help
Install guide here https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
I used the windows version and I copied the required Nvidia RT lib files into the bin folder of my CUDA 11.5 which is the same directory path as the Environment Variables PATH for CUDA I have setup.
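The copy step above can be scripted. This is only a sketch; both paths are assumptions you must point at your actual TensorRT extract and CUDA install (the CUDA bin directory is already on PATH, which is why DLLs copied there become loadable):

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your machine.
TENSORRT_LIB = Path(r"C:\tools\TensorRT-8.2.1.8\lib")
CUDA_BIN = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\bin")

def copy_tensorrt_dlls(src=TENSORRT_LIB, dst=CUDA_BIN):
    """Copy every TensorRT DLL into the CUDA bin folder; return names copied."""
    copied = []
    for dll in sorted(src.glob("*.dll")):
        shutil.copy2(dll, dst / dll.name)
        copied.append(dll.name)
    return copied
```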
Thanks. I downloaded the CUDA Toolkit 11.5 exe and installed it, which upgraded me from 9.1. I then extracted TensorRT-8.2.1.8.Windows10.x86_64.cuda-11.4.cudnn8.2.zip and put it in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\bin. Does the TensorRT version have to match 11.5 also?

I then ran python.exe -m pip install tensorrt-*-cp3x-none-win_amd64.whl in Anaconda, and it said the install was successful. Ran SimSwap and it went from about 17 minutes for the multiface example to about 14 minutes, but GPU usage is still at 1%, occasionally going to 5% for a couple of seconds. So I then went to C:\ProgramData\Miniconda3\pkgs\cudatoolkit-11.1.1-heb2d755_9\Library\bin and installed TensorRT there too. Still nothing.

I have used about 15-20 GB of disk space with random installers putting files only God knows where, lol. It also still says "The specified module could not be found." when trying to load "C:\ProgramData\Miniconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll", but the dll IS there. Still I get this:
2021-12-13 17:32:41.9097918 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "C:\ProgramData\Miniconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2021-12-13 17:32:41.9102262 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
input mean and std: 127.5 127.5
find model: ./insightface_func/models\antelope\glintr100.onnx recognition
2021-12-13 17:32:46.2065175 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "C:\ProgramData\Miniconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2021-12-13 17:32:46.2066498 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
find model: ./insightface_func/models\antelope\scrfd_10g_bnkps.onnx detection

People who are actually utilizing RTX 3000 series cards, how long does it take to run the examples and how much GPU utilization are you getting? Thanks everyone.
I'm not sure about those errors or whether they are specific to your GPU. But on my 2080 Ti, GPU utilization is also only about 1 to 3%, and I check with GPU-Z.
If you are getting not found errors maybe your path is incorrect and/or files are not where they should be.
I'm still learning as well and haven't yet looked into what it might take to drive GPU utilization up. My guess is this git repo is abandoned like all other Machine Learning repos motivated by academic research. They have either moved on to new research projects or are working hard to commercialize this.
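One way to check the PATH theory above: error 126 often means a dependency of the provider DLL (for example a TensorRT DLL like nvinfer.dll) is missing from PATH, even when onnxruntime_providers_tensorrt.dll itself exists. A small stdlib-only helper (the name is mine) can show which PATH entries actually contain a given DLL:

```python
import os

def find_dll_on_path(name, path=None):
    """Return every directory on PATH that actually contains `name`.

    `name` is e.g. "nvinfer.dll"; pass a custom `path` string to test
    something other than the live environment's PATH.
    """
    path = os.environ.get("PATH", "") if path is None else path
    hits = []
    for d in path.split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, name)):
            hits.append(d)
    return hits
```

If this returns an empty list for the TensorRT DLLs, the copy/PATH step didn't take effect in the environment SimSwap actually runs in.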
About how long does it take for you to run the multiface example? Also, can this run on a computer with Intel integrated graphics and no GPU, since it doesn't really seem to use the GPU anyway?
Hello, I downloaded TensorRT and copied the lib files to bin according to the official document, but I still get an error. I also tried setting the environment variable manually, but it was useless and I got the same error.
You have to change model_zoo.py, line 23, to
session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
Yes, with a 3090 it works to change line 23 in model_zoo.py to session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
Hello! Where is model_zoo.py located? I can't find it. *Edit: sorry, the file is there: "C:\Users\username.conda\envs\simswap\Lib\site-packages\insightface\model_zoo"
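Rather than hunting through folders, you can ask Python itself where it would load a module from. A small sketch (the helper name is mine; the insightface dotted path is an example to run inside the simswap env):

```python
import importlib.util

def locate_module(name):
    """Return the file a module would load from, or None if it can't be found."""
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:
        # A parent package in the dotted path doesn't exist.
        return None
    return spec.origin if spec else None

# Example (inside the simswap env):
# print(locate_module("insightface.model_zoo.model_zoo"))
```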
Card is RTX 3060 Ti. I was getting the "stuck at end" issue so I searched around and did this
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge
and this
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
And still I get this
Traceback (most recent call last):
  File "test_video_swapsingle.py", line 58, in <module>
    app = Face_detect_crop(name='antelope', root='./insightface_func/models')
  File "D:\Programs\SimSwapHQ\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
    model = model_zoo.get_model(onnx_file)
  File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, None)
  File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
    "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
I did run simswap fine on my 1050Ti a few months ago. I had an issue with some 79999 thing but found the answer on a youtube comment I think and it ran fine after that. Thanks everyone.
Hi, do you remember which video? I have the same problem. I got errors installing pip install insightface==0.2.1 onnxruntime moviepy,
so I did this: after cd'ing into the SimSwap folder,
I ran pip install opencv-python==4.3.0.36
and then pip install insightface==0.2.1 onnxruntime moviepy.
But I was stuck on END, so I followed your guide:
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge
-->
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
-->
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
-->
File "C:\Users\PC1.conda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model session = onnxruntime.InferenceSession(self.onnx_file, None)
Once there, change the line to look like this:
session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
-->
But I'm not using the GPU like you. I can try to install TensorRT (I didn't understand well how; I have Windows 11), but I'm worried it will not work because of the 79999 thing that you mentioned. Maybe it's this file? C:\SimSwap\SimSwap-main\parsing_model\checkpoint\79999_iter.pth But what must I do with it? @nonlin @Echolink50
When I run SimSwap it says:
2023-05-01 08:45:52.9573482 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "Impossible to find specific module." when trying to load "C:\Users*\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2023-05-01 08:45:52.9576475 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
input mean and std: 127.5 127.5
find model: ./insightface_func/models\antelope\glintr100.onnx recognition
2023-05-01 08:45:56.0082379 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "Impossible to find specific module." when trying to load "C:\Users*\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
2023-05-01 08:45:56.0083935 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 onnxruntime::python::CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
find model: ./insightface_func/models\antelope\scrfd_10g_bnkps.onnx detection
set det-size: (640, 640)
(142, 366, 4)
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to C:\Users****/.cache\torch\hub\checkpoints\resnet18-5c106cde.pth
100%|█████████████████████████████████████████████████████████████████████████████| 44.7M/44.7M [00:05<00:00, 9.16MB/s]
4%|███▍ | 214/4957 [00:30<10:46, 7.33it/s]
The video is 2:45 minutes.