SimSwap
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled.
(simswap) C:\Users\foldd\Desktop\SimSwap>python test_video_swapmulti.py --crop_size 224 --use_mask --name people --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path lo.png --video_path pt.mp4 --output_path ./output/multi_test_swapmulti-pt.mp4 --temp_path ./temp
------------ Options -------------
Arc_path: arcface_model/arcface_checkpoint.tar
aspect_ratio: 1.0
batchSize: 8
checkpoints_dir: ./checkpoints
cluster_path: features_clustered_010.npy
crop_size: 224
data_type: 32
dataroot: ./datasets/cityscapes/
display_winsize: 512
engine: None
export_onnx: None
feat_num: 3
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 50
id_thres: 0.03
image_size: 224
input_nc: 3
instance_feat: False
isTrain: False
label_feat: False
label_nc: 0
latent_size: 512
loadSize: 1024
load_features: False
local_rank: 0
max_dataset_size: inf
model: pix2pixHD
multisepcific_dir: ./demo_file/multispecific
nThreads: 2
n_blocks_global: 6
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 3
n_local_enhancers: 1
name: people
nef: 16
netG: global
ngf: 64
niter_fix_global: 0
no_flip: False
no_instance: False
no_simswaplogo: False
norm: batch
norm_G: spectralspadesyncbatch3x3
ntest: inf
onnx: None
output_nc: 3
output_path: ./output/multi_test_swapmulti-pt.mp4
phase: test
pic_a_path: lo.png
pic_b_path: ./crop_224/zrf.jpg
pic_specific_path: ./crop_224/zrf.jpg
resize_or_crop: scale_width
results_dir: ./results/
semantic_nc: 3
serial_batches: False
temp_path: ./temp
tf_log: False
use_dropout: False
use_encoded_image: False
use_mask: True
verbose: False
video_path: pt.mp4
which_epoch: latest
-------------- End ----------------
Traceback (most recent call last):
File "test_video_swapmulti.py", line 58, in
Is there a question in that post somewhere?
Sorry, I thought this was not a forum for posting questions. But since you ask, sure: how can I fix this error?
Thank you.
Ah, ok. The way I fixed it (I use a conda environment, by the way) was to uninstall onnxruntime-gpu and then run pip install onnxruntime-gpu==1.9.0.
It has something to do with the latest version, 1.12.0, and the CUDA version, I think; I haven't been able to work it out.
Somehow that sorted the problem you have above.
However, it made very little difference to processing speed on my K80: it went from 1.2 it/s to 1.48 it/s.
Try this one: https://github.com/mike9251/simswap-inference-pytorch It is faster than the official repository and supports the RTX3000 series for inference.
I was able to fix this error by changing site-packages\insightface\model_zoo\model_zoo.py:23
from: session = onnxruntime.InferenceSession(self.onnx_file, None)
to: session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
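A slightly more defensive variant of that edit (a sketch of the idea, not SimSwap or insightface code) avoids hard-coding providers that a given ONNX Runtime build may not include: filter a preference list against what the build actually supports, and fall back to CPU. The `pick_providers` helper below is a hypothetical name; in practice you would pass it the list returned by `onnxruntime.get_available_providers()`.

```python
def pick_providers(available):
    """Prefer CUDA over CPU, keeping only providers this ORT build supports.

    `available` is expected to be the list returned by
    onnxruntime.get_available_providers().
    """
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    # Always fall back to CPU so InferenceSession receives a non-empty list.
    return chosen or ["CPUExecutionProvider"]
```

The edited line in model_zoo.py would then pass `providers=pick_providers(onnxruntime.get_available_providers())` instead of a fixed list.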
I have the same problem with my 10-series graphics card. The old version does need to be installed, because the training code has not been updated for the new version.
I can confirm that editing the following file works.
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 23
Before:
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, None)
After:
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])
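If editing site-packages is inconvenient (for example, on a runtime that is rebuilt on every start), one alternative is to wrap onnxruntime.InferenceSession once at startup, before insightface loads any models, so every session receives an explicit providers list. This is a hedged sketch under that assumption, not an insightface API; `with_providers` is my own name for the helper.

```python
import functools

def with_providers(session_factory, providers):
    """Wrap a session constructor so it always receives an explicit
    providers list unless the caller supplies one itself."""
    @functools.wraps(session_factory)
    def wrapper(*args, **kwargs):
        # setdefault leaves an explicitly passed providers argument untouched.
        kwargs.setdefault("providers", list(providers))
        return session_factory(*args, **kwargs)
    return wrapper

# Usage sketch (assumes onnxruntime is installed); run before importing insightface:
#   import onnxruntime
#   onnxruntime.InferenceSession = with_providers(
#       onnxruntime.InferenceSession,
#       ["CUDAExecutionProvider", "CPUExecutionProvider"],
#   )
```

This keeps the installed package untouched, so the workaround survives reinstalls of insightface.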
Related:
- https://github.com/neuralchen/SimSwap/issues/445
Can you help me implement this fix for a hosted GPU runtime?