EfficientFormer
What about gpu inference time?
Are there any experiments regarding the speed of efficientformer on gpu devices?
Same here. Could you please provide ablation studies that use only the GPU/CPU on iPhone? Depending on the iPhone runtime, when the NPU (ANE) is enabled, some operations are not supported and may fall back to running outside the NPU.
Thanks for your interest in our work.
We are going to add the following speed results in the final manuscript:
- Speed on NVIDIA A100, deployed with TensorRT
- Speed on Google Pixel 6, deployed with NNAPI
- Speed on iPhone CPU, deployed with CoreML
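For anyone who wants to collect rough latency numbers themselves while waiting for the official results, the sketch below shows a generic warmup-then-measure timing loop in pure Python. The `fake_forward` function is a hypothetical stand-in for a model forward pass; real measurements on A100, Pixel, or iPhone would go through TensorRT, NNAPI, or CoreML respectively, and GPU timing additionally needs device-side synchronization before reading the clock.

```python
import time
from statistics import median

def benchmark(fn, warmup=10, iters=100):
    """Return the median latency of fn() in milliseconds.

    Warmup iterations are discarded so one-time costs
    (JIT compilation, cache population) do not skew results.
    """
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        # Convert seconds to milliseconds for readability.
        times.append((time.perf_counter() - start) * 1000.0)
    # Median is more robust to scheduler noise than the mean.
    return median(times)

# Hypothetical stand-in for a model forward pass.
def fake_forward():
    sum(i * i for i in range(10_000))

latency_ms = benchmark(fake_forward)
print(f"median latency: {latency_ms:.3f} ms")
```

Reporting the median over many iterations, after a warmup phase, is the usual way to keep OS scheduling jitter and cold-start effects out of the reported number.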
Hi @edwardyehuang, with the latest iOS and Xcode, you can measure latency directly on the CPU, or on CPU & GPU, by following this link.