Mert
> Wrote a matrix build script to see what model is not supported, all of them are `nlib` and `XLM`. [yoni13/immich_to_rknn2/actions/runs/12892750773](https://github.com/yoni13/immich_to_rknn2/actions/runs/12892750773) Oh wow, that is pretty cool! I have an...
Sorry, just a heads up that I will be away for the next two weeks and won't be able to test it locally or upload the converted models to Hugging...
I got it working on my RK3588 and made some changes to make things cleaner. The unit tests need to be updated and the model export CLI needs to be...
Also, is there something we can do about the NCHW warnings?
For a quick comparison with ARM NN, I ran Locust with ViT-B-32__openai and targeted only the visual model (ARM NN doesn't work for the textual model at the moment). With...
The container stabilizes at around 765MiB for ARM NN when fp16 turbo is enabled, 696MiB when it's disabled, and 1.308GiB for RKNPU. CPU usage is around 70-80% for ARM NN and...
FWIW it only uses 600MiB if `rknn_threads` is set to 1, or 970MiB if set to 2.
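For anyone wanting to reproduce the lower-memory configuration, an override might look like this; the exact environment variable name is an assumption based on the `rknn_threads` setting and may differ in the final PR:

```shell
# Hypothetical env override for the rknn_threads setting; the real
# variable name may differ. Per the numbers above, 1 thread keeps the
# container around 600MiB, 2 threads around 970MiB.
export MACHINE_LEARNING_RKNN_THREADS=1
```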
> also, wonder if RKNPU can also do FP16 This is with fp16. The model exporter also supports quantization but requires a calibration dataset.
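As a note on the calibration dataset: rknn-toolkit2 quantization takes a plain-text file listing one calibration image path per line. A minimal sketch for generating such a file, assuming a directory of sample images (the helper name and paths here are illustrative, not from the exporter):

```python
# Hypothetical helper: build a calibration list file for quantization.
# Scans a directory for images and writes one path per line.
from pathlib import Path

def write_calibration_list(image_dir: str, out_file: str = "dataset.txt") -> int:
    """Write paths of images under image_dir to out_file, one per line.

    Returns the number of images found.
    """
    paths = sorted(
        p for p in Path(image_dir).rglob("*")
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    )
    Path(out_file).write_text("\n".join(str(p) for p in paths) + "\n")
    return len(paths)
```

The resulting file would then be passed to the exporter's quantization step as the calibration dataset.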
> > I got it working on my RK3588 and made some changes to make things cleaner. The unit tests need to be updated and the model export CLI needs...
I've uploaded the facial recognition models as well as some new SigLIP2 models to HF. I can upload the rest after we confirm everything works as expected with these models.