SR_Mobile_Quantization
running time on AI Benchmark App
Hello, I tested the running time of 'base7_D4C28_bs16ps64_lr1e-3_qat_time.tflite' via the AI Benchmark App. My device is a Snapdragon 888 and its AI score is 54.4. The model takes about 200 ms with NNAPI. In the paper, your device is a Snapdragon 820 and it takes ~30 ms. Do you have any idea why the running time differs so much?
Thanks so much.
It seems that your device doesn't use NNAPI to accelerate the model.
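Not part of the original thread, but a minimal sketch of one way to check whether NNAPI is actually accelerating the model: load the same .tflite file with the TensorFlow Lite NnApiDelegate in a small Android test and time the inference against a plain CPU interpreter. If the NNAPI latency is not clearly lower, the model is most likely falling back to CPU. The loadModelFile helper and the timing loop are assumptions for illustration, not code from this repo.

```kotlin
import android.content.Context
import android.os.SystemClock
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Hypothetical helper: memory-map the .tflite model from the app's assets.
fun loadModelFile(context: Context, assetName: String): MappedByteBuffer {
    context.assets.openFd(assetName).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }
}

// Build an interpreter with or without the NNAPI delegate so the two
// latencies can be compared directly on the same device.
fun buildInterpreter(model: MappedByteBuffer, useNnapi: Boolean): Interpreter {
    val options = Interpreter.Options()
    if (useNnapi) {
        // Supported ops are routed to NNAPI; unsupported ops fall back to CPU.
        options.addDelegate(NnApiDelegate())
    }
    return Interpreter(model, options)
}

// Rough timing loop; input/output buffers must match the model's I/O shapes.
fun measureMs(interpreter: Interpreter, input: Any, output: Any, runs: Int = 20): Double {
    interpreter.run(input, output) // warm-up run
    val start = SystemClock.elapsedRealtimeNanos()
    repeat(runs) { interpreter.run(input, output) }
    return (SystemClock.elapsedRealtimeNanos() - start) / 1e6 / runs
}
```

If the NNAPI number and the CPU number are about the same, the delegate is probably not picking up the quantized ops on that device, and the AI Benchmark result would reflect CPU execution rather than NNAPI acceleration.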
My device is a Snapdragon 835. It takes about 700 ms with NNAPI.
It seems that your device doesn't use NNAPI to accelerate the model.
I enter PRO Mode and select the Custom Model. When I click the run button, the interface returns to the initial page and the inference information for the selected model is not displayed. What could be the problem?