simswap-inference-pytorch

The file onnxruntime_pybind11_state.pyd references certain local file paths:

SerZhyAle opened this issue 1 year ago • 0 comments

D:\a_work\1\s\onnxruntime\python\onnxruntime

Tried with CUDA:

(myenv) c:_N\simswap-inference-pytorch>streamlit run app_web.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.70:8501

A new version of Streamlit is available.

See what's new at https://discuss.streamlit.io/c/announcements

Enter the following command to upgrade:
$ pip install streamlit --upgrade

2023-01-09 04:05:13.8153921 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:_N\Anaconda3\envs\myenv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2023-01-09 04:05:13.8601772 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:_N\Anaconda3\envs\myenv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2023-01-09 04:05:13.8770375 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:_N\Anaconda3\envs\myenv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2023-01-09 04:05:13.867 Uncaught app exception
Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 589, in get_or_create_cached_value
    return_value = _read_from_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 349, in _read_from_cache
    raise e
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 334, in _read_from_cache
    return _read_from_mem_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 252, in _read_from_mem_cache
    raise CacheKeyNotFoundError("Key not found in mem cache")
streamlit.runtime.legacy_caching.caching.CacheKeyNotFoundError: Key not found in mem cache

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:_N\simswap-inference-pytorch\app_web.py", line 157, in <module>
    model = load_model(config)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 623, in wrapped_func
    return get_or_create_cached_value()
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 607, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "C:_N\simswap-inference-pytorch\app_web.py", line 111, in load_model
    return SimSwap(
  File "c:_N\simswap-inference-pytorch\src\simswap.py", line 64, in __init__
    self.face_detector = get_model(
  File "c:_N\simswap-inference-pytorch\src\model_loader.py", line 97, in get_model
    model = models[model_name].model(**kwargs)
  File "c:_N\simswap-inference-pytorch\src\FaceDetector\face_detector.py", line 26, in __init__
    self.handler = model_zoo.get_model(str(model_path))
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
    model = router.get_model()
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
    session = onnxruntime.InferenceSession(self.onnx_file, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 381, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:548 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
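LoadLibrary error 126 means Windows could not find onnxruntime_providers_cuda.dll or, more often, one of the CUDA/cuDNN runtime DLLs it depends on. A minimal diagnostic sketch to check whether those DLLs are actually reachable on PATH (the DLL names below are an assumption for a CUDA 11.x build of onnxruntime-gpu; adjust them for your installed versions):

```python
import os
from pathlib import Path

# DLL names assumed for a CUDA 11.x build of onnxruntime-gpu; adjust for your setup
REQUIRED_DLLS = ["cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"]

def find_on_path(dll_names):
    """Map each DLL name to the first PATH directory that contains it, or None."""
    dirs = [d for d in os.environ.get("PATH", "").split(os.pathsep) if d]
    return {
        name: next((d for d in dirs if (Path(d) / name).is_file()), None)
        for name in dll_names
    }

if __name__ == "__main__":
    for name, location in find_on_path(REQUIRED_DLLS).items():
        print(f"{name}: {location or 'NOT FOUND on PATH'}")
```

If any of these report NOT FOUND, the CUDA/cuDNN bin directories need to be added to PATH (or the versions matched to what this onnxruntime build was compiled against).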


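Until the CUDA/cuDNN versions are fixed, a possible workaround is to keep the app running on CPU by only requesting providers that are actually available. A sketch of that selection logic (pick_providers is a hypothetical helper, not part of this repo):

```python
def pick_providers(available):
    """Prefer the CUDA provider when present; always keep CPU as a fallback."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# With onnxruntime installed, this would be used roughly as:
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)

print(pick_providers(["CPUExecutionProvider"]))  # ['CPUExecutionProvider']
print(pick_providers(["CUDAExecutionProvider", "CPUExecutionProvider"]))
```

Requesting CUDAExecutionProvider unconditionally, as the insightface model_zoo code in the traceback does, is what triggers the RuntimeError when the CUDA DLLs cannot be loaded.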

SerZhyAle, Jan 09 '23 03:01