yehao


You can use Netscope (http://ethereon.github.io/netscope/#/editor) to view the PeleeNet-SSD network structure; you will find that the stage4_tb/ext/pm2/res layer is used twice, to generate both the ext/pm1_mbox_loc layer and the ext/pm2_mbox_loc layer (the same is true for the conf and priorbox layers)....
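
If you prefer to check this programmatically rather than in Netscope, here is a minimal sketch (assuming pycaffe is installed and that `deploy.prototxt` is a placeholder path for the PeleeNet-SSD prototxt) that lists every layer consuming the stage4_tb/ext/pm2/res blob:

```python
# Sketch: list the layers that take a given blob as input.
# Assumes pycaffe is installed; "deploy.prototxt" is a placeholder path.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

net = caffe_pb2.NetParameter()
with open("deploy.prototxt") as f:
    text_format.Merge(f.read(), net)

target = "stage4_tb/ext/pm2/res"
for layer in net.layer:
    if target in layer.bottom:
        # Each match is a consumer of the same feature map,
        # e.g. the loc/conf/priorbox heads of two prediction modules.
        print(layer.name, layer.type)
```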

MobileNet-SSD also follows this design pattern; I will update later.

No, batch size affects training time but has no direct relation to model performance.

You can have a look at [it](https://www.zhihu.com/question/32673260)

@foralliance Model performance depends on the model itself; batch size is just a hyper-parameter. If you change the batch size and then pick other suitable hyper-parameters such as base_lr, the final result will be the same. In general, a larger batch only makes training faster and makes it easier to get results; it does not fundamentally determine the model's performance.
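
As a concrete illustration of "pick other suitable hyper-parameters", one common heuristic (the linear scaling rule, not something specific to this repository) is to scale base_lr by the same factor as the batch size. A minimal sketch, with purely illustrative reference values:

```python
# Sketch of the linear scaling heuristic: when the batch size changes,
# scale the base learning rate by the same factor so training behaves similarly.
# The reference values below are hypothetical, not taken from this repository.
def scaled_base_lr(new_batch_size, ref_batch_size=32, ref_base_lr=0.001):
    return ref_base_lr * new_batch_size / float(ref_batch_size)

print(scaled_base_lr(64))  # 0.002  -> larger batch, larger learning rate
print(scaled_base_lr(16))  # 0.0005 -> smaller batch, smaller learning rate
```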

@foralliance I have read that answer. With accum_batch_size kept fixed, and for a network with no BN layers (or with the BN parameters frozen), I agree with the point that batch size does not affect model performance.
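
For context, in the SSD-style Caffe training scripts the effective batch size (accum_batch_size) is kept constant by accumulating gradients over iter_size forward passes. A minimal sketch of that relationship (the numbers are examples only):

```python
# Sketch: keep the effective batch size (accum_batch_size) constant while the
# per-iteration batch_size changes, by accumulating gradients over iter_size steps.
# Values are examples only.
accum_batch_size = 32  # effective batch size the solver update corresponds to
batch_size = 8         # per-iteration batch size that fits in GPU memory

iter_size = accum_batch_size // batch_size  # Caffe solver parameter iter_size
print(iter_size)  # 4 -> gradients accumulated over 4 forward/backward passes
```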

You downloaded the wrong file. I guess you downloaded pelenet_inet_acc7243.caffemodel, which is a classification model rather than a detection model.

@lucheng07082221 You can test it yourself.

What do you mean by a C++ implementation? Are you referring to the PeleeNet network itself, or to a framework (like Caffe or ncnn)?

@Robert-JunWang Hi, I have two questions: - I found that you have uploaded a merged model; what is the principle behind it? Merging BN into the convolution layer changes the original convolution...
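
On the first question, merging BN into the preceding convolution is a standard algebraic fold of the BN/Scale parameters into the conv weights and bias. A minimal numpy sketch of that fold (generic, not the exact script used for the uploaded model):

```python
import numpy as np

def merge_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm (and Scale) parameters into the preceding convolution.

    W:     conv weights, shape (out_channels, in_channels, kh, kw)
    b:     conv bias, shape (out_channels,) -- use zeros if the conv had no bias
    gamma, beta, mean, var: per-channel BN/Scale parameters, shape (out_channels,)
    """
    scale = gamma / np.sqrt(var + eps)
    W_merged = W * scale.reshape(-1, 1, 1, 1)  # rescale each output channel's filters
    b_merged = (b - mean) * scale + beta       # fold the running mean and shift into the bias
    return W_merged, b_merged
```

The merged conv computes the same output as conv + BN + Scale at inference time, which is why the merged model gives identical results while running faster.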