Sergey Zhdanov
@JierunChen Hello, do you have a workaround for this problem?
Hello, have you been able to resolve this issue? @leovinus2001
@leovinus2001 Could you please share the code for "dim/shape/getitem/inverse"? Those are still not implemented in version 4.1.
@AliaksandrSiarohin Are there any settings/arguments in the generator, for example, that could be tweaked (even at the cost of result quality) to boost performance?
> I never tried this, I guess tricks with replacing standard blocks with mobile net blocks should not hurt the performance much.

@AliaksandrSiarohin Thank you for the quick reply. Could...
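For a concrete picture of what swapping in MobileNet-style blocks could look like, here is a minimal PyTorch sketch of a depthwise-separable convolution block that might replace a standard 3x3 conv block. The class name and channel sizes are illustrative only, not taken from the repo:

```python
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    """MobileNet-style stand-in for a standard 3x3 conv block (hypothetical)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # depthwise 3x3 conv: one filter per input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # pointwise 1x1 conv mixes channels back together
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

# e.g. a drop-in for a 256 -> 256 channel conv block in the generator
block = SeparableConvBlock(256, 256)
out = block(torch.rand(1, 256, 64, 64))
```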
Well, it is possible to run the model on mobile (iOS at least), but it needs some optimisation. I was able to get 2 sec per frame on an iPhone X Max.
@lshug magnificent work 🚀 Have you tested it on a mobile device?

@urmit-space I was able to run the original model, converted to TorchScript, on mobile using **LibTorch** - only...
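For reference, tracing the model to TorchScript for LibTorch could look roughly like the sketch below. It assumes the repo's `load_checkpoints` helper from `demo.py`, a 256x256 input, and placeholder config/checkpoint paths - adjust all of these to your setup:

```python
import torch
from demo import load_checkpoints  # helper from the first-order-model repo

# paths below are placeholders
generator, kp_detector = load_checkpoints(
    config_path="config/vox-adv-256.yaml",
    checkpoint_path="vox-adv-cpk.pth.tar",
    cpu=True,
)

kp_detector.eval()
example = torch.rand(1, 3, 256, 256)            # assumed input resolution
traced = torch.jit.trace(kp_detector, example)  # may emit tracer warnings on data-dependent branches
traced.save("kp_detector.pt")                   # load this file from LibTorch on iOS/Android
```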
@lshug Unfortunately, only the `kp_detector` **SavedModel** converted to a CoreML model (same goes for PyTorch's `kp_detector`):

```
import coremltools as ct
name = 'kp_detector'
model = ct.convert(
    './saved_models/vox-adv/' + name,
    source="tensorflow"
)
model.save(name...
```
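For reference, a complete, self-contained variant of that snippet (the SavedModel path and output filename are assumptions) would look roughly like:

```python
import coremltools as ct

name = "kp_detector"
# convert the TensorFlow SavedModel directory to CoreML
model = ct.convert(
    "./saved_models/vox-adv/" + name,  # assumed SavedModel path
    source="tensorflow",
)
model.save(name + ".mlmodel")          # assumed output filename
```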
@lshug What fps did you get on a mobile device? What are the device specs (iOS/Android, device model etc.)?
@lshug well, here is my savedmodel>onnx>mlmodel pipeline. I'm using the [tensorflow-onnx converter](https://github.com/onnx/tensorflow-onnx).

### savedmodel>onnx

`python -m tf2onnx.convert --saved-model "./saved_models/vox-adv/generator" --opset 9 --output generator.onnx --verbose` (`--opset 9` is the default)

**Output:**
```
/Users/SERG/opt/anaconda3/envs/fom/lib/python3.8/runpy.py:127:...
```
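As an optional sanity check before the onnx>mlmodel step, something like the following (a sketch, assuming `onnx` and `onnxruntime` are installed) can validate the exported graph and print its input names/shapes:

```python
import onnx
import onnxruntime as ort

# structural validation of the exported generator
onnx_model = onnx.load("generator.onnx")
onnx.checker.check_model(onnx_model)

# list the graph's inputs so the mlmodel conversion can be fed matching shapes
sess = ort.InferenceSession("generator.onnx", providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
```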