
Totally new to all this, please help

LongjonSlim opened this issue 3 years ago · 3 comments

I followed the tutorial exactly; however, when I try to run it I get this:

Traceback (most recent call last):
  File "test_video_swapsingle.py", line 58, in <module>
    app = Face_detect_crop(name='antelope', root='./insightface_func/models')
  File "E:\Anaconda\envs\simswap\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
    model = model_zoo.get_model(onnx_file)
  File "E:\Anaconda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "E:\Anaconda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, None)
  File "E:\Anaconda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\Anaconda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
    "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Any help would be great, thank you.

LongjonSlim · Jul 04 '22 01:07

Hello. Try this out.

In whichever swap script you're using, find the line that looks like this:

https://github.com/neuralchen/SimSwap/blob/dd1ecdd2a718636d33977ab3097a69a0ecf080d8/test_video_swapsingle.py#L58

Then add the providers parameter at the end:

app = Face_detect_crop(name='antelope', root='./insightface_func/models', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) #  <--- Add this
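For background, this is an onnxruntime API change: since ORT 1.9, InferenceSession must be given an explicit providers list, exactly as the error message says. A minimal standalone sketch of that requirement (the model path here is just a placeholder, not a file from this repo):

import onnxruntime

# Since onnxruntime 1.9 the providers argument must be set explicitly;
# ORT tries the providers in order and falls back to later entries
# (e.g. CPU) if an earlier one is unavailable.
session = onnxruntime.InferenceSession(
    "model.onnx",  # placeholder path to any ONNX model
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
)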

Let us know if that works for you.

ExponentialML · Jul 04 '22 03:07

I've had the same problem and I tried adding the providers. However, now I am getting this:

app = Face_detect_crop(name='antelope', root='./insightface_func/models', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
TypeError: __init__() got an unexpected keyword argument 'providers'

Christosioan · Jul 04 '22 15:07

> I've had the same problem and I tried adding the providers. However, now I am getting this:
>
> app = Face_detect_crop(name='antelope', root='./insightface_func/models', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
> TypeError: __init__() got an unexpected keyword argument 'providers'

Sorry, missed a step :).

An easier way would be to remove the providers argument, uninstall onnxruntime-gpu, and then pip install onnxruntime==1.8.0.
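That is, something like this from inside the activated simswap environment (assuming onnxruntime-gpu was installed with pip):

pip uninstall onnxruntime-gpu
pip install onnxruntime==1.8.0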

If you want to use a GPU with onnxruntime, you have to open face_detect_crop_single.py and add the providers argument to the Face_detect_crop class.

At this part:

https://github.com/neuralchen/SimSwap/blob/dd1ecdd2a718636d33977ab3097a69a0ecf080d8/insightface_func/face_detect_crop_single.py#L31

Add in:

def __init__(self, name, root='~/.insightface/models/', providers=None): # Yes None, this is not a typo.
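(Defaulting providers to None rather than to a concrete list means every existing call site that doesn't pass the argument keeps working exactly as before.)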

After line 39 and before line 40 (i.e., on new lines),

https://github.com/neuralchen/SimSwap/blob/dd1ecdd2a718636d33977ab3097a69a0ecf080d8/insightface_func/face_detect_crop_single.py#L39

Add in the necessary conditional logic (if/else):

if onnx_file.find('_selfgen_') > 0:
    # print('ignore:', onnx_file)
    continue
if providers is not None:
    model = model_zoo.get_model(onnx_file, providers=providers)
else:
    model = model_zoo.get_model(onnx_file)
if model.taskname not in self.models:
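For reference, here is a self-contained sketch of the pattern those lines implement: pass providers through only when the caller supplied one. The function name and the onnx_files argument are illustrative, not names from the repo:

from insightface.model_zoo import model_zoo

def load_detection_models(onnx_files, providers=None):
    # Illustrative helper mirroring the loop body above.
    models = {}
    for onnx_file in onnx_files:
        if onnx_file.find('_selfgen_') > 0:
            continue  # skip self-generated model files, as the original loop does
        if providers is not None:
            # Forward the explicit provider list when one was given
            model = model_zoo.get_model(onnx_file, providers=providers)
        else:
            model = model_zoo.get_model(onnx_file)
        if model.taskname not in models:
            models[model.taskname] = model
    return models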

Try this and let us know your results.

ExponentialML · Jul 04 '22 18:07

Not the OP, but there are some more changes needed to pass the providers list further, up to the ONNX inference session object.
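For anyone following along, this likely means the layer below: judging by the frames in the traceback at the top, the call that actually fails lives in insightface's own model_zoo.py, so providers has to reach that InferenceSession call too. A rough sketch of the idea only (the router shape is inferred from the traceback; the real code varies across insightface versions and wraps the session in a model object):

import onnxruntime

# Sketch of the router class in insightface/model_zoo/model_zoo.py,
# as seen in the traceback frames above; not the exact upstream code.
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self, providers=None):
        # ORT >= 1.9 rejects an omitted/None providers argument, so forward
        # the explicit list handed down from Face_detect_crop.
        if providers is None:
            providers = ['CPUExecutionProvider']
        return onnxruntime.InferenceSession(self.onnx_file, providers=providers)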

sshivs · Jul 16 '22 01:07

I have the same problem and I haven't been able to solve it. Can you specify the other changes needed to make it work? Even with ExponentialML's corrections I still get the same error.

Mayorc1978 · Mar 07 '23 21:03