
M2 mac is so slow

Open linzai1992 opened this issue 1 year ago • 26 comments

Is this normal? Here is the log:

2024-08-10 19:07:27.779 python[13724:6838109] WARNING: AVCaptureDeviceTypeExternal is deprecated for Continuity Cameras. Please use AVCaptureDeviceTypeContinuityCamera and add NSCameraUseContinuityCameraDeviceType to your Info.plist.
Frame processor face_enhancer not found
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/yunlinchen/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/yunlinchen/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/yunlinchen/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/yunlinchen/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/yunlinchen/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]

linzai1992 avatar Aug 10 '24 11:08 linzai1992

That's normal; I believe it's still using the CPU.

RoversX avatar Aug 10 '24 12:08 RoversX

Why is it still using the CPU? The packages for Apple Silicon from the guide are installed.

xunsheng avatar Aug 10 '24 13:08 xunsheng

Why is it still using the CPU? The packages for Apple Silicon from the guide are installed.

I'm not sure, but when I check the CPU/GPU history, it seems the model is still running on the CPU.

RoversX avatar Aug 10 '24 13:08 RoversX

@hacksider Has this been tested on Macs?

oefterdal avatar Aug 10 '24 14:08 oefterdal

0% GPU usage on an M1 Mac. Speed is about 1.5 frames/s.

I used the commands below for the Apple Silicon installation (the README could be updated):

# First attempt, commented out; this doesn't work:
# CONDA_SUBDIR=osx-arm64 conda create -y -n deep_live_cam -c conda-forge python=3.10

# This works for an Apple Silicon conda installation:
conda config --set subdir osx-arm64

# Follow the guide to update onnxruntime:
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1

# To avoid this error:
# /miniforge/base/envs/deep_live_cam_arm64/lib/python3.10/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64'))
pip uninstall insightface
arch -arm64 pip install --no-cache-dir insightface==0.7.3

python run.py --execution-provider coreml
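To double-check that the conda env is actually running a native arm64 interpreter (the incompatible-architecture error above usually means the Python itself is x86_64 and therefore pulls x86_64 wheels), a quick stdlib check along these lines should work:

```python
import platform
import struct

# Under a native Apple Silicon interpreter this prints "arm64"; "x86_64"
# means the env runs through Rosetta and will install x86_64 wheels.
print(platform.machine())

# Pointer-size sanity check: 8 bytes on any 64-bit build.
print(struct.calcsize("P"))
```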

xunsheng avatar Aug 10 '24 21:08 xunsheng

CoreML is not recommended for this model (inswapper_128_fp16.onnx). So it falls back to CPU? (screenshot attached)

segg avatar Aug 10 '24 22:08 segg

ChatGPT suggests converting the ONNX model to CoreML. Would it be possible to add some code to do that?

import onnx
from onnx_coreml import convert

# Load the ONNX model
onnx_model = onnx.load("your_model.onnx")

# Convert to CoreML model
coreml_model = convert(model=onnx_model, minimum_ios_deployment_target='13')

# Save the CoreML model
coreml_model.save("your_model.mlmodel")

xunsheng avatar Aug 10 '24 22:08 xunsheng

Let me know if there is an optimized setup for Apple Silicon 😹

0xrsydn avatar Aug 11 '24 13:08 0xrsydn

Just tried it, and Activity Monitor shows 450% CPU usage while running this, and nothing on the GPU. It seems the GPU isn't being used on the Mac.

storizzi avatar Aug 11 '24 15:08 storizzi

I tried converting the ONNX model to CoreML, but it didn't work; it might be a version compatibility issue.

sieglu2 avatar Aug 12 '24 04:08 sieglu2

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me; it's always on CPU.

Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, ~~then it's on GPU~~.

Update: Not sure whether it's on GPU or CPU, since both utilizations are below 100%. But it does help the fps, roughly 1 fps ➞ 10 fps.

gongzhang avatar Aug 12 '24 04:08 gongzhang

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me. It's always on CPU.

Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, then it's on GPU.

Where can I find the inswapper_128.onnx model? Thanks in advance.

ThanhNguye-n avatar Aug 12 '24 04:08 ThanhNguye-n

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me. It's always on CPU. Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, then it's on GPU.

Where can I find the inswapper_128.onnx model? Thanks in advance.

https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128.onnx Just change the path in the readme

sieglu2 avatar Aug 12 '24 04:08 sieglu2

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me. It's always on CPU.

Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, then it's on GPU.

It's definitely much, much better, but still not as fast as the demo video in the repo. Maybe my MacBook Pro (Max chip, 32 GB) isn't good enough? @hacksider thanks for the great work!

sieglu2 avatar Aug 12 '24 05:08 sieglu2

@sieglu2 Do I need to uninstall version 1.13.1 of onnxruntime as gongzhang said? If so, which version should I install?

Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]

ThanhNguye-n avatar Aug 12 '24 05:08 ThanhNguye-n

@sieglu2 Do I need to uninstall version 1.13.1 of onnxruntime as gongzhang said? If so, which version should I install?

Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/thanhnguyen/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]

Maybe onnxruntime-silicon==1.16.3, but I found the deepfake quality decreases with it; the good news is that the performance cost is close to zero.

RoversX avatar Aug 12 '24 05:08 RoversX

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me. It's always on CPU. Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, then it's on GPU.

Where can I find the inswapper_128.onnx model? Thanks in advance.

https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128.onnx Just change the path in the readme

I tried this and it's faster than before (~1.5 frames/s ➞ ~10 frames/s), but it seems to still run on the CPU?

chunzha1 avatar Aug 12 '24 07:08 chunzha1

pip install onnxruntime-silicon==1.13.1 <= This version does not work for me. It's always on CPU. Instead, simply run pip install -r requirements.txt and change the model from inswapper_128_fp16.onnx to inswapper_128.onnx, then it's on GPU.

Where can I find the inswapper_128.onnx model? Thanks in advance.

https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128.onnx Just change the path in the readme

I tried this and it's faster than before (~1.5 frames/s ➞ ~10 frames/s), but it seems to still run on the CPU?

Same result here (~10 fps). Sorry for the confusion in my previous reply: it doesn't fully utilize the GPU, maybe not at all, and CPU utilization is also below 100%.

gongzhang avatar Aug 12 '24 07:08 gongzhang

Same here: Activity Monitor only shows 20% GPU usage when running. How can we check which provider is actually being used?

rbrunan avatar Aug 12 '24 08:08 rbrunan

With that model (inswapper_128.onnx), it shows:

Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]

Not sure which of the two it is using. The GPU shows 20-25% usage, but it's still very slow.

The GPU is:


  Chipset Model:	Apple M1 Max
  Type:	GPU
  Bus:	Built-In
  Total Number of Cores:	32

rbrunan avatar Aug 12 '24 09:08 rbrunan

Using CPU: [screenshot] Using CoreML: [screenshot] It seems onnxruntime does not support CoreML with the GPU, so it's using CoreML on the CPU (still faster than the plain CPU provider). [screenshot] See https://github.com/microsoft/onnxruntime

chunzha1 avatar Aug 12 '24 09:08 chunzha1

I can't get it to work with inswapper_128.onnx; it's still looking for inswapper_128_fp16.onnx. How do I make it switch?

Traceback (most recent call last):
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py", line 554, in _clicked
    self._command()
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 104, in <lambda>
    live_button = ctk.CTkButton(root, text='Live', cursor='hand2', command=lambda: webcam_preview())
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 286, in webcam_preview
    temp_frame = frame_processor.process_frame(source_image, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 60, in process_frame
    temp_frame = swap_face(source_face, target_face, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 48, in swap_face
    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 43, in get_face_swapper
    FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/admin/scripts/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.
^C%

vashat avatar Aug 12 '24 10:08 vashat

I can't get it to work with inswapper_128.onnx; it's still looking for inswapper_128_fp16.onnx. How do I make it switch?

Traceback (most recent call last):
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py", line 554, in _clicked
    self._command()
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 104, in <lambda>
    live_button = ctk.CTkButton(root, text='Live', cursor='hand2', command=lambda: webcam_preview())
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 286, in webcam_preview
    temp_frame = frame_processor.process_frame(source_image, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 60, in process_frame
    temp_frame = swap_face(source_face, target_face, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 48, in swap_face
    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 43, in get_face_swapper
    FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/admin/scripts/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.
^C%

I had to change the hardcoded path in https://github.com/hacksider/Deep-Live-Cam/blob/fff3009c806b79d001c03b777c788d26a84d765f/modules/processors/frame/face_swapper.py#L42
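For illustration, the change amounts to a one-line edit along these lines (resolve_relative_path below is a stand-in for the repo helper of the same name; the real one resolves paths relative to the module file):

```python
import os

def resolve_relative_path(path: str) -> str:
    # Stand-in for the helper used in modules/processors/frame/face_swapper.py.
    return os.path.abspath(path)

# was: resolve_relative_path('../models/inswapper_128_fp16.onnx')
model_path = resolve_relative_path('../models/inswapper_128.onnx')
print(model_path)
```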

rbrunan avatar Aug 12 '24 11:08 rbrunan

I can't get it to work with inswapper_128.onnx; it's still looking for inswapper_128_fp16.onnx. How do I make it switch?

Traceback (most recent call last):
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py", line 554, in _clicked
    self._command()
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 104, in <lambda>
    live_button = ctk.CTkButton(root, text='Live', cursor='hand2', command=lambda: webcam_preview())
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 286, in webcam_preview
    temp_frame = frame_processor.process_frame(source_image, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 60, in process_frame
    temp_frame = swap_face(source_face, target_face, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 48, in swap_face
    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 43, in get_face_swapper
    FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/admin/scripts/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.
^C%

I had to change the hardcoded path:

https://github.com/hacksider/Deep-Live-Cam/blob/fff3009c806b79d001c03b777c788d26a84d765f/modules/processors/frame/face_swapper.py#L42

Just change inswapper_128.onnx to inswapper_128_fp16.onnx, and fps goes up!

MrDongProjects avatar Aug 12 '24 12:08 MrDongProjects

I can't get it to work with inswapper_128.onnx; it's still looking for inswapper_128_fp16.onnx. How do I make it switch?

Traceback (most recent call last):
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py", line 554, in _clicked
    self._command()
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 104, in <lambda>
    live_button = ctk.CTkButton(root, text='Live', cursor='hand2', command=lambda: webcam_preview())
  File "/Users/admin/scripts/Deep-Live-Cam/modules/ui.py", line 286, in webcam_preview
    temp_frame = frame_processor.process_frame(source_image, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 60, in process_frame
    temp_frame = swap_face(source_face, target_face, temp_frame)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 48, in swap_face
    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
  File "/Users/admin/scripts/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 43, in get_face_swapper
    FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/admin/miniconda3/envs/deeplivecam/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/admin/scripts/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.
^C%

I had to change the hardcoded path: https://github.com/hacksider/Deep-Live-Cam/blob/fff3009c806b79d001c03b777c788d26a84d765f/modules/processors/frame/face_swapper.py#L42

Just change inswapper_128.onnx to inswapper_128_fp16.onnx, and fps goes up!

Unfortunately I stopped seeing the face being swapped in most frames (although the frame rate is much better!), and insightface seems to auto-download some additional files:

download_path: /Users/xxx/.insightface/models/buffalo_l Downloading /Users/xxx/.insightface/models/buffalo_l.zip from https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip...

find model: /Users/xxx/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68

Any ideas?

storizzi avatar Aug 13 '24 16:08 storizzi

I discovered that some acceleration is happening on the ANE (the Neural Engine), rather than the GPU, when using CoreML. I used the asitop utility to measure it: the ANE sits at about 25% utilization while running, so it is not fully utilized. Since the framerate is still low even with the fix above, there must be some other bottleneck that prevents full use of the ANE?

vashat avatar Aug 14 '24 06:08 vashat