CHaiDNN
Quantization of MobileNet-SSD fails
Hi guys, I've read #53 and #42 but I cannot quantize MobileNet-SSD.
Link: https://github.com/chuanqi305/MobileNet-SSD
This is what I've done:
1. Download the prototxt: https://drive.google.com/file/d/0B3gersZ2cHIxWGEzbG5nSXpNQzA/view
2. Download the caffemodel: https://drive.google.com/open?id=0B3gersZ2cHIxRm5PMWRoTkdHdHc
3. Run XportDNN.pyc:
python XportDNN.pyc --quant_type "Xilinx" \
--deploy_model ./mobilenet/MobileNet_deploy.prototxt \
--weights ./mobilenet/MobileNetSSD_deploy.caffemodel \
--quantized_deploy_model ./mobilenet/quantized_deploy.prototxt \
--calibration_directory ./xilinx_quant_example_model/sample_calibration_dataset/ --calibration_size 10 \
--bitwidths 6,6,6 --dims 3,300,300 --transpose 2,0,1 \
--channel_swap 2,1,0 --raw_scale 255.0 \
--mean_value 127.5 --input_scale 0.007843
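For reference, the --raw_scale/--mean_value/--input_scale flags above should map pixel values into roughly [-1, 1]. A small sketch of that arithmetic (the exact order of operations inside XportDNN is an assumption, based on how pycaffe's Transformer applies these parameters):

```python
# Sketch of the preprocessing implied by the flags above.
# Assumes pixels are loaded in [0, 1] (as caffe.io.load_image does),
# then scaled by raw_scale, mean-subtracted, and multiplied by input_scale.

def preprocess(pixel, raw_scale=255.0, mean_value=127.5, input_scale=0.007843):
    """Map a pixel in [0, 1] to the network's expected input range."""
    return (pixel * raw_scale - mean_value) * input_scale

# A black pixel maps near -1, mid-gray to 0, a white pixel near +1:
print(preprocess(0.0))  # close to -1.0
print(preprocess(0.5))  # exactly 0.0
print(preprocess(1.0))  # close to +1.0
```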
Please find attached both prototxt and caffemodel.
Error:
F1205 09:58:58.985906 29455 net.cpp:813] Check failed: target_blobs.size() == source_layer.blobs_size() (1 vs. 2) Incompatible number of blobs for layer conv0
*** Check failure stack trace: ***
Aborted (core dumped)
What am I doing wrong?
Thanks in advance!
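For context, the failing check at net.cpp:813 fires when a layer in the deploy prototxt declares a different number of parameter blobs than the caffemodel stores under the same layer name — for example, a convolution defined with `bias_term: false` (1 blob: weights only) loading from a model whose conv0 holds both weights and bias (2 blobs). A rough Python sketch of the comparison Caffe performs (the layer names and blob counts below are illustrative, not read from a real model):

```python
# Illustrative sketch of the consistency check behind the
# "Incompatible number of blobs" failure in Caffe's net.cpp.

def check_blob_counts(target_layers, source_layers):
    """Compare per-layer blob counts between the deploy prototxt
    (target) and the caffemodel (source); return mismatched layers."""
    mismatches = []
    for name, target_blobs in target_layers.items():
        source_blobs = source_layers.get(name)
        if source_blobs is not None and source_blobs != target_blobs:
            mismatches.append((name, target_blobs, source_blobs))
    return mismatches

# conv0 declared without a bias (1 blob) but saved with one (2 blobs):
deploy = {"conv0": 1, "conv1/dw": 1}
model = {"conv0": 2, "conv1/dw": 2}
print(check_blob_counts(deploy, model))
# -> [('conv0', 1, 2), ('conv1/dw', 1, 2)]
```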
@salcanmor Have you modified the first layer in the prototxt as below?

name: "MobileNet-SSD"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 300
      dim: 300
    }
  }
}
I've tried that with no luck. See the attached prototxt. Did you manage to make the quantization of MobileNet-SSD work?
@salcanmor I managed to quantize MobileNet-SSD and can run it on CHaiDNN, but the accuracy is very poor. I think depthwise convolution is not supported; I am waiting for help.
Can you please share the model and prototxt you used? Thanks in advance!
I cloned the prototxt and caffemodel from here: https://github.com/chuanqi305/MobileNet-SSD
If we are using the same prototxt and caffemodel, why do I get an error and you don't?
Can you try this?
python XportDNN.pyc --quant_type "Xilinx" \
--deploy_model ./models/MobilenetSSD_300_deploy.prototxt \
--weights ./models/MobilenetSSD_300_deploy.caffemodel \
--quantized_deploy_model ./models/MobilenetSSD_300_quantized_deploy.prototxt \
--calibration_directory ./data/VOC0712 --calibration_size 32 \
--bitwidths 8,8,8 --dims 3,300,300 --transpose 2,0,1 \
--channel_swap 2,1,0 --raw_scale 255.0 \
--mean_value 127.5,127.5,127.5 --input_scale 0.007843
I'm getting the same error:
F1210 09:14:50.208392 27097 net.cpp:813] Check failed: target_blobs.size() == source_layer.blobs_size() (1 vs. 2) Incompatible number of blobs for layer conv1/dw
*** Check failure stack trace: ***
Aborted (core dumped)
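Since the same blob-count mismatch now fires at conv1/dw, one thing worth checking is whether the deploy prototxt declares `bias_term: false` on convolution layers whose saved weights do include a bias. A simplified, regex-based scan for such layers (it assumes the usual one-field-per-line prototxt formatting; a real check would parse the file with Caffe's protobuf definitions):

```python
import re

# Simplified scan: list layer names whose definition sets
# bias_term: false. Assumes one field per line, as in typical
# deploy.prototxt files.

def layers_without_bias(prototxt_text):
    names = []
    current = None
    for line in prototxt_text.splitlines():
        m = re.search(r'name:\s*"([^"]+)"', line)
        if m:
            current = m.group(1)
        if re.search(r'bias_term:\s*false', line) and current:
            names.append(current)
    return names

sample = '''
layer {
  name: "conv1/dw"
  type: "Convolution"
  convolution_param {
    bias_term: false
    num_output: 32
  }
}
'''
print(layers_without_bias(sample))  # -> ['conv1/dw']
```

Any layer reported here but stored with two blobs in the caffemodel would trip exactly this check.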
@cuongdv1 Hi, can you please share the FPS of MobilenetSSD running on ZCU102? Thanks a lot!
30 FPS, but the accuracy is too bad. I think it did not run the full network.
Thank you! Maybe the bad accuracy is caused by the Xilinx quantization; it may not be good enough for lightweight models like MobileNet.
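One way to see why low-bit, per-tensor quantization can hurt a model like MobileNet: if a layer has even one large-magnitude weight, a shared scale leaves very few effective levels for the many small weights typical of depthwise layers. A toy sketch of symmetric uniform quantization at 6 vs 8 bits (the weight values are illustrative, and this is not XportDNN's actual algorithm):

```python
def quantize(values, bits):
    """Symmetric uniform quantization with a single per-tensor scale,
    as in simple fixed-point schemes (toy model only)."""
    max_abs = max(abs(v) for v in values)
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max_abs / levels
    return [round(v / scale) * scale for v in values]

# One outlier weight forces a coarse scale; the small weights suffer:
weights = [2.0, 0.03, -0.02, 0.01]
for bits in (6, 8):
    q = quantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, q))
    print(bits, "bits -> max error", err)
```

At 6 bits the 0.03 weight collapses to zero entirely, while 8 bits still resolves it, which matches the intuition that the drop from 8,8,8 to 6,6,6 bitwidths costs lightweight models disproportionately.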
@cuongdv1 Did you use deploy.prototxt and mobilenet_iter_73000.caffemodel as inputs?