CaffeConTroll
CaffeConTroll on mobile devices
Will the techniques CaffeConTroll uses to speed up neural networks (lowering and batching) work well on mobile devices?
In my understanding, the lowering technique is similar to the im2col function in Caffe.
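To make the comparison concrete, here is a minimal sketch of im2col-style lowering (stride 1, no padding, with hypothetical shapes): each k×k patch of the input becomes a column of a matrix, so the convolution reduces to one matrix multiply.

```python
import numpy as np

def im2col(x, k):
    """Lower a (C, H, W) input into a matrix whose columns are the
    flattened k x k patches, so convolution becomes a single GEMM."""
    C, H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((C * k * k, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + k, j:j + k].ravel()
            idx += 1
    return cols

# Convolution as GEMM: filter matrix (F, C*k*k) times the lowered input.
x = np.random.rand(3, 5, 5)        # hypothetical 3-channel 5x5 input
w = np.random.rand(4, 3 * 3 * 3)   # 4 filters with 3x3 kernels
y = w @ im2col(x, 3)               # shape (4, 9), reshapes to (4, 3, 3)
```

This is the same trick Caffe's im2col uses: it trades extra memory (patches are duplicated) for a dense matrix multiply that BLAS libraries execute efficiently.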
And batching means setting a large batch size during training and distributing it across CPUs and GPUs dynamically according to their FLOPS. For example, with 32 images in one batch, if the CPU delivers 1 GFLOPS and the GPU 3 GFLOPS, then the CPU gets 32 × (1/4) = 8 images and the GPU gets 32 × (3/4) = 24.
If my understanding is correct, I think the answer to your question is yes: just dispatch a different share of each input batch to the mobile CPU, GPU, or DSP based on its processing capability.
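The proportional dispatch described above can be sketched as follows (device names and GFLOPS numbers are hypothetical; a real system would measure throughput at runtime):

```python
def split_batch(batch_size, gflops):
    """Split a batch across devices in proportion to their throughput.
    `gflops` maps a device name to its measured rate (hypothetical values)."""
    total = sum(gflops.values())
    shares = {d: int(batch_size * f / total) for d, f in gflops.items()}
    # hand any rounding remainder to the fastest device
    fastest = max(gflops, key=gflops.get)
    shares[fastest] += batch_size - sum(shares.values())
    return shares

# The example from this thread: CPU at 1 GFLOPS, GPU at 3 GFLOPS, batch of 32
print(split_batch(32, {"cpu": 1.0, "gpu": 3.0}))  # {'cpu': 8, 'gpu': 24}

# On a mobile SoC, a DSP could join the same scheme
print(split_batch(32, {"cpu": 1.0, "gpu": 3.0, "dsp": 2.0}))
```

The same ratio-based split applies to any set of heterogeneous compute units; the open question for mobile is whether the measured throughput stays stable under thermal throttling.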