
Few Noob questions

Open Harmfulbrown25 opened this issue 2 years ago • 18 comments

Hi,

First off, I'm very new to all of this, so bear with me.

I have an RTX 2080 8 GB, 64 GB RAM, and an Intel i9 9900K.

I rarely get over 3 it/s. Is that normal for these specs?

Second, every time I try running pip install onnxruntime-gpu I can no longer use SimSwap and am met with this:

2022-01-28 07:26:40.8562616 [E:onnxruntime:Default, provider_bridge_ort.cc:940 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "C:\Users\USER\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
Traceback (most recent call last):
  File "test_video_swapsingle.py", line 58, in <module>
    app = Face_detect_crop(name='antelope', root='./insightface_func/models')
  File "F:\installx\SimSwap-main\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
    model = model_zoo.get_model(onnx_file)
  File "C:\Users\USER\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "C:\Users\USER\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, None)
  File "C:\Users\USER\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 324, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\USER\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 369, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:516 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

To be able to use it again I've had to reinstall Anaconda 3.

Third, how do you change the det-size and other settings to use the SimSwap 512 model?

Thanks all!

Harmfulbrown25 avatar Jan 31 '22 09:01 Harmfulbrown25

Definitely not normal. However, I believe I see the issue: you want your PyTorch/ONNX/CUDA versions to be compatible. Step 1: clone a backup of your environment before proceeding any further. This can be done with:

conda create --name myclone --clone myenv

Here myclone is the name of your backup and myenv is the name of your working environment.

Once you do this, you'll have peace of mind and can build a proper environment from scratch. Once the new one is working, you can delete the backup. That said, I suggest you create a brand-new environment rather than modifying the existing one.
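The backup-and-restore cycle described above, as a sketch; the environment names are just examples:

```shell
# 1. Back up the working environment before touching it
conda create --name myclone --clone myenv

# 2. ...experiment with myenv, or build a new environment from scratch...

# 3. If something breaks, restore from the backup
conda remove --name myenv --all
conda create --name myenv --clone myclone

# 4. Once everything works, delete the backup
conda remove --name myclone --all
```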

Before you create an environment, here are some things you need to know. It helps to install PyTorch with all its dependencies: PyTorch comes bundled with a cudatoolkit, provided the toolkit is compatible with your current Nvidia driver. For example, CUDA 11.x requires a driver version of 470 or newer.

  • On Windows, open the Nvidia control panel and look up your driver and CUDA version.

  • On Linux, type nvidia-smi in the terminal.

Your probable CUDA version is 10.2, 11.0, 11.4, 11.5, or 11.6; you'll know it when you see it.

Next: for this example we will assume your CUDA version is 11.4. (This is not your CUDA toolkit version, so don't worry about that; this number tells you which CUDA toolkit versions are compatible with your driver. For example, if you're using driver 430 you can't use a 10.2 toolkit, so you might have to update your Nvidia driver.) Create a new environment with python=3.6 and run:

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
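Putting the environment-creation steps above together as one sequence; the environment name simswap is just an example:

```shell
# Create and activate a fresh environment with Python 3.6
conda create -n simswap python=3.6
conda activate simswap

# Install PyTorch with a bundled CUDA 11.3 toolkit
# (an 11.4-capable driver can run an 11.3 toolkit)
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```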

Install the other packages as listed in the preparation guide, minus onnxruntime.

(optional): pip install --ignore-installed imageio

pip install insightface==0.2.1 moviepy

For onnxruntime-gpu, here's what you need to know: do not have both onnxruntime-gpu and onnxruntime installed in your environment (according to your package list, you currently do). Use one or the other to avoid issues and confusion.
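To make sure only the GPU build ends up in the environment, it's safest to remove the CPU package first; a sketch (the 1.9 pin matches the version chosen later in this comment):

```shell
# Remove the CPU-only build if present, then install the GPU build
pip uninstall -y onnxruntime
pip install onnxruntime-gpu==1.9
```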

This is where your Cuda version that you gathered from your settings, comes into play:

https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

On this page you will see a chart of dependencies for each CUDA version. We will still assume your CUDA is 11.4.

Your onnxruntime-gpu version would be 1.9 or 1.10. I went with 1.9 for this example:

pip install onnxruntime-gpu==1.9
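Once installed, a quick way to confirm the GPU build is actually usable is to list the available execution providers; CUDAExecutionProvider should appear. This is a sketch that degrades gracefully if onnxruntime isn't installed:

```python
# Check which execution providers this onnxruntime build exposes.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []

print(providers)
if "CUDAExecutionProvider" not in providers:
    print("CUDA provider not available - check onnxruntime-gpu / CUDA / cuDNN")
```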

You still have a few more steps, but almost done.

The cuDNN version is 8.2.2.26.

And the lib64 files you should have are:

  • libcudart 11.4.43
  • libcufft 10.5.2.100
  • libcurand 10.2.5.120
  • libcublasLt 11.6.1.51
  • libcublas 11.6.1.51
  • libcudnn 8.2.4

Install a few more things (you might have to pip install these instead of conda, I can't recall):

conda install nvidia-cublas-cu114

conda install nvidia-cudnn-cu114

conda install nvidia-cuda-runtime-cu114

Now exit your environment, restart your computer, and reactivate the environment.

Run the script test_video_swapsingle.py (from the SimSwap folder, with python test_video_swapsingle.py).

If you get a message saying that you must provide a CUDA execution provider, let me know; it's a simple fix.

For your video, make sure it is no more than 1280x720, and use 224 for crop_size.

Tell me what your results are and then we can proceed further. I know it seems like a lot but all of this should take you no more than 15 minutes. Good luck!

Fibonacci134 avatar Jan 31 '22 12:01 Fibonacci134

Hi mate,

Thank you for your help!

So, my driver version is 511.23 and my CUDA version is 11.6.

I've managed to install onnxruntime-gpu 1.10; it says it installed correctly, but when I check my package list none of those lib64 files are present.

I struggled with getting the Nvidia things to install, but I believe I managed it (I followed this site: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html).

When running testvideoswapsingle.py I get:

'testvideoswapsingle.py' is not recognized as an internal or external command, operable program or batch file.

Thank you for your help again. I understand it's probably frustrating trying to explain what to do to someone who has no idea lol.

Harmfulbrown25 avatar Feb 01 '22 00:02 Harmfulbrown25

Hey, just an update on those lib64 files I'm missing.

I've realised I can find them, and the commands needed to get them, here: https://anaconda.org/nvidia/repo

So disregard that, I'll post again once they're all done.

Cheers

Update: all seemed to work except libcudart and libcublasLt, as they aren't in the repo.

Here are my packages

blas 1.0 mkl
certifi 2020.6.20 py36_0 anaconda
charset-normalizer 2.0.11 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
cuda-cudart 11.4.148 h7554279_0 nvidia/label/cuda-11.4.3
cudatoolkit 11.3.1 h59b6b97_2
cudnn 8.2.1 cuda11.3_0
cycler 0.11.0 pypi_0 pypi
dataclasses 0.8 pyh4f3eec9_6
decorator 4.4.2 pypi_0 pypi
easydict 1.9 pypi_0 pypi
flatbuffers 2.0 pypi_0 pypi
freetype 2.10.4 hd328e21_0
idna 3.3 pypi_0 pypi
imageio 2.14.1 pypi_0 pypi
imageio-ffmpeg 0.4.5 pypi_0 pypi
insightface 0.2.1 pypi_0 pypi
intel-openmp 2022.0.0 haa95532_3663
joblib 1.1.0 pypi_0 pypi
jpeg 9b hb83a4c4_2
kiwisolver 1.3.1 pypi_0 pypi
libcublas 11.8.1.74 h62d394a_0 nvidia
libcufft 10.7.0.55 hfce90f6_0 nvidia
libcurand 10.2.9.55 h4e24775_0 nvidia
libpng 1.6.37 h2a8f88b_0
libtiff 4.2.0 hd0e1b90_0
libuv 1.40.0 he774522_0
lz4-c 1.9.3 h2bbff1b_1
matplotlib 3.3.4 pypi_0 pypi
mkl 2020.2 256
mkl-service 2.3.0 py36h196d8e1_0
mkl_fft 1.3.0 py36h46781fe_0
mkl_random 1.1.1 py36h47e9c7a_0
moviepy 1.0.3 pypi_0 pypi
networkx 2.5.1 pypi_0 pypi
ninja 1.10.2 h559b2a2_2
numpy 1.19.5 pypi_0 pypi
numpy-base 1.19.2 py36ha3acd2a_0
nvidia-cublas-cu114 11.6.5.2 pypi_0 pypi
nvidia-cuda-runtime-cu11 2021.12.20 pypi_0 pypi
nvidia-cuda-runtime-cu114 11.4.148 pypi_0 pypi
nvidia-cuda-runtime-cu116 11.6.55 pypi_0 pypi
nvidia-pyindex 1.0.9 pypi_0 pypi
olefile 0.46 py36_0
onnx 1.10.2 pypi_0 pypi
onnxruntime-gpu 1.10.0 pypi_0 pypi
opencv-python 4.5.5.62 pypi_0 pypi
pillow 8.4.0 pypi_0 pypi
pip 21.2.2 py36haa95532_0
proglog 0.1.9 pypi_0 pypi
protobuf 3.19.4 pypi_0 pypi
pyparsing 3.0.7 pypi_0 pypi
python 3.6.13 h3758d61_0
python-dateutil 2.8.2 pypi_0 pypi
pytorch 1.10.2 py3.6_cuda11.3_cudnn8_0 pytorch
pytorch-mutex 1.0 cuda pytorch
pywavelets 1.1.1 pypi_0 pypi
requests 2.27.1 pypi_0 pypi
scikit-image 0.17.2 pypi_0 pypi
scikit-learn 0.24.2 pypi_0 pypi
scipy 1.5.4 pypi_0 pypi
setuptools 58.0.4 py36haa95532_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.37.0 h2bbff1b_0
threadpoolctl 3.1.0 pypi_0 pypi
tifffile 2020.9.3 pypi_0 pypi
tk 8.6.11 h2bbff1b_0
torchaudio 0.10.2 py36_cu113 pytorch
torchvision 0.11.3 py36_cu113 pytorch
tqdm 4.62.3 pypi_0 pypi
typing_extensions 3.10.0.2 pyh06a4308_0
urllib3 1.26.8 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wheel 0.37.1 pyhd3eb1b0_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.5 h62dcd97_0
zlib 1.2.11 h8cc25b3_4
zstd 1.4.9 h19a0ad4_0

Harmfulbrown25 avatar Feb 01 '22 00:02 Harmfulbrown25

This is where I'm at now:

Traceback (most recent call last):
  File "test_video_swapsingle.py", line 58, in <module>
    app = Face_detect_crop(name='antelope', root='./insightface_func/models')
  File "F:\installx\SimSwap-main\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
    model = model_zoo.get_model(onnx_file)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, None)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
    "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Harmfulbrown25 avatar Feb 01 '22 02:02 Harmfulbrown25

Update:

I managed to get it working (I found another comment from you about using ORT 1.9 instead of 1.10, and that fixed it). However, I'm still only sitting at 1.5-2.5 it/s. Any pointers?

Harmfulbrown25 avatar Feb 01 '22 03:02 Harmfulbrown25

Hey bud, don't worry, it's not frustrating. We all start off not knowing this stuff and just learn along the way when we run into issues 😁. 2.5 it/s is unacceptable on your GPU, bro; on a 1280x720 video with a GTX 1650 4 GB, I get a minimum of 6 it/s. I believe the issue here is the version of CUDA that is active within the environment, which is confusing the environment about where the actual library is. Inside your environment, run the command vncc --version

If this gives back 11.3, then that is the issue.

Also, I was wondering: in your Nvidia settings, do you have it set to performance mode with clock boost turned on? And make sure your computer is set to maximum performance rather than "on demand" or battery saver.

I will upload a link to my environment as a .yml file; you can create a new environment and tell Anaconda to build it from the yml file, which I believe is pretty well optimized, although it can be better lol. (I'm still working on setting up TensorRT, which should give a serious performance boost; it's just a pain to set up.)

I have also made a couple of small changes to testvideoswapsingle.py, basically setting the det size to (256, 256), which definitely speeds up inference. So maybe before anything else you can try that option. To change the det size without getting errors, go to the top of the Python script where all the imports are and add:

import onnxruntime

And around line 61, add the following:

onnxruntime.set_default_logger_severity(3)

On line 63, change det_size from (640,640) to (256,256).
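The two edits described above, sketched as runnable Python; the detector lines are shown as comments because they depend on the SimSwap repo being present, and the exact prepare() arguments are an assumption:

```python
# Edit 1: at the top of test_video_swapsingle.py, with the other imports,
# silence onnxruntime's warning spam (severity 3 = errors only).
try:
    import onnxruntime
    onnxruntime.set_default_logger_severity(3)
    logger_configured = True
except ImportError:
    logger_configured = False

# Edit 2: where the detector is prepared (around line 63 in the script),
# shrink det_size from (640, 640) to (256, 256), e.g.:
#   app = Face_detect_crop(name='antelope', root='./insightface_func/models')
#   app.prepare(ctx_id=0, det_size=(256, 256))

print("onnxruntime logger configured:", logger_configured)
```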

Let me know how that works out for you; I should be able to export my yml file later today and post it.

Fibonacci134 avatar Feb 01 '22 12:02 Fibonacci134

Hey man thanks for the reply!

Unfortunately, when I run the "vncc --version" command I get: 'vncc' is not recognized as an internal or external command, operable program or batch file.

Adding those lines definitely boosted the it/s to about 5-7 now :) thanks. Is there much of a difference between 640 and 256 in terms of quality?

Thanks for all your help

Update*

Those rates were specific to that video :( I'm sitting at 3-4 it/s on almost all others; at one point it jumped to 93 or something lol, no idea what happened.

Harmfulbrown25 avatar Feb 01 '22 23:02 Harmfulbrown25

Hey, glad it's a bit better. The quality is pretty much the same, bud; with the 256 det size it actually makes more pictures iterable. Be sure the input resolution is 1280x720 and the container is MP4 with an H.264 codec (most should be). There is definitely a bit more tuning to do, but I believe the video where you got the slow speed was probably 1920x1080; if we fine-tune a bit more you'll get at least 5 it/s even with that. Just be sure not to use the 512 crop size, as it generally produces bad results. Also, since you're using Windows, to use the "vncc --version" command be sure you're already in the environment, as Windows won't recognize that command on its own (unless in PowerShell).

--Lol, the jump to 93 is when the face is out of frame and it's skipping the cropping and alignment.

Fibonacci134 avatar Feb 02 '22 14:02 Fibonacci134

Hey bud, gonna post a yml file that should work for your setup by Saturday. In the meantime, just be sure you have your GPU set to performance, as well as your CPU. And if you're using a laptop, plug it in.

Fibonacci134 avatar Feb 03 '22 11:02 Fibonacci134

Awesome, that sounds great man!

Thank you for all of your help with this.

Harmfulbrown25 avatar Feb 04 '22 00:02 Harmfulbrown25

it's not vncc --version ! it's nvcc --version 🤦‍♂️

illtellyoulater avatar Feb 04 '22 22:02 illtellyoulater

Hey man!

I ran the version thing and got this:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Fri_Dec_17_18:28:54_Pacific_Standard_Time_2021
Cuda compilation tools, release 11.6, V11.6.55
Build cuda_11.6.r11.6/compiler.30794723_0

Cheers

Harmfulbrown25 avatar Feb 06 '22 22:02 Harmfulbrown25

Omg 😳 sorry bro, I'm using the mobile app and I'm borderline dyslexic. Yes, it's definitely "nvcc --version". To anyone who tried "vncc" and wanted to tear your hair out, my sincere apologies. Update on the environment file: gonna post today. Some packages did not work the same way between Linux and Windows, and with Windows just being an overall pain, I didn't spend too much time optimizing the environment. Seeing "vnc" just brought back some horrible server-management memories, lol: remote access without file transfer (unless you pay for the corporate version). Good luck, guys.

Fibonacci134 avatar Feb 08 '22 11:02 Fibonacci134

Hey dude,

Did you end up posting your file thing?

Harmfulbrown25 avatar Feb 28 '22 03:02 Harmfulbrown25

This is where I'm at now:

Traceback (most recent call last):
  File "test_video_swapsingle.py", line 58, in <module>
    app = Face_detect_crop(name='antelope', root='./insightface_func/models')
  File "F:\installx\SimSwap-main\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
    model = model_zoo.get_model(onnx_file)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, None)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\jacob\anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
    "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

I'm stuck on this step, how did you fix this?

seconddarko avatar Mar 27 '22 21:03 seconddarko

@seconddarko

This is how I fixed it, man:

I managed to get it working ( i found another comment from you about not using ORT 1.10 and instead using 1.9 and it fixed it)
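For reference, the error message itself points at an alternative fix that doesn't require downgrading: since ORT 1.9, InferenceSession needs an explicit providers argument. A sketch of the change (the file and line come from the traceback above):

```python
# In insightface/model_zoo/model_zoo.py (line 23 in the traceback), replace:
#     session = onnxruntime.InferenceSession(self.onnx_file, None)
# with:
#     session = onnxruntime.InferenceSession(self.onnx_file, providers=providers)
# using a provider list such as:
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
print(providers)
```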

Harmfulbrown25 avatar Mar 29 '22 02:03 Harmfulbrown25

@seconddarko

This is how I fixed it, man:

I managed to get it working ( i found another comment from you about not using ORT 1.10 and instead using 1.9 and it fixed it)

Dude, I'm so sorry; I've been going through a couple of things as of late and haven't posted yet. Tomorrow, more than likely. My bad, bro.

Fibonacci134 avatar Apr 18 '22 04:04 Fibonacci134

@Fibonacci134 Hey dude, it's boomstick.

I think someone deleted me from the discord, do you have an invite link?

LongjonSlim avatar Jul 03 '22 23:07 LongjonSlim