I have lots of ONNX quantized int4/fp16 models in my Hugging Face repo that work with this repo: https://github.com/ZTMIDGO/Android-Stable-diffusion-ONNX/. How can I use those models with this...
@vitoplantamura Why does this command fail to bind the library and not build the binary in Termux: `cmake -DMAX_SPEED=ON -DXNNPACK_DIR= ..` followed by `cmake --build . --config Release`? But this works: `cmake -DXNNPACK_DIR=$HOME/XNNPACK ...`
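For what it's worth, the failing invocation passes an empty value to `-DXNNPACK_DIR=`, so CMake has no path against which to resolve the XNNPACK headers and libraries. A minimal sketch of a build sequence that should work, assuming XNNPACK was already cloned and built under `$HOME/XNNPACK` and that you run this from OnnxStream's source directory (the directory layout here is an assumption):

```bash
# Assumption: XNNPACK is already cloned and built at $HOME/XNNPACK,
# and we are inside the OnnxStream source tree.
mkdir -p build && cd build

# -DXNNPACK_DIR must point at the XNNPACK clone; leaving it empty
# (as in `-DXNNPACK_DIR= ..`) gives CMake nothing to link against.
cmake -DMAX_SPEED=ON -DXNNPACK_DIR=$HOME/XNNPACK ..
cmake --build . --config Release
```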
@vitoplantamura Thanks for testing it out. Also, a native Linux environment should be faster than Termux, right? Does the inference speed depend heavily on the number of threads? I have noticed...
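One way to probe the thread-count dependence without touching the code is to restrict which CPUs the process may run on. This is only a sketch using `taskset` (which I believe is available in Termux via the `util-linux` package); confining the process to fewer cores caps its effective parallelism even if the binary sizes its thread pool from the total core count:

```bash
# Sketch: time one run on all cores, then one confined to 4 cores.
# Assumes ./sd is the demo binary built above and taskset is installed.
time ./sd                  # all available cores
time taskset -c 0-3 ./sd   # confined to CPUs 0-3
```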
@romanovj With CMake in a Termux environment on Android 13 (Snapdragon 860) I got around 40 seconds per iteration, so inference takes about 3-5 minutes with `./sd`. On Termux...
> interesting

Ohh, that explains it. Colab and other CPU providers have 4 threads at most.
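For anyone who wants to verify this on their own host, here is a quick check of how many vCPUs the environment actually exposes (standard Linux tools, nothing specific to this repo):

```bash
# Number of processing units available to this process
# (respects CPU affinity restrictions).
nproc
# Cross-check against the kernel's list of online CPUs.
grep -c ^processor /proc/cpuinfo
```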