jnulzl

Results: 16 issues by jnulzl

@shicai As the title says, I was fine-tuning a model on top of your work and found training very slow. After replacing Conv+group with [DepthwiseConvolution](https://github.com/yonghenglh6/DepthwiseConvolution), the speed instantly improved by roughly 10x!
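For context, a grouped convolution whose group count equals the number of input channels degenerates to a depthwise convolution: one 2D filter per channel, with no cross-channel accumulation. A dedicated depthwise kernel can therefore skip the per-group bookkeeping of a generic grouped-conv implementation, which is where the speedup comes from. A minimal NumPy sketch of the depthwise semantics (my own illustration, not the repo's code):

```python
import numpy as np

def depthwise_conv2d(x, k):
    """Depthwise 'valid' convolution, stride 1.

    x: input of shape (C, H, W); k: one filter per channel, shape (C, kh, kw).
    Each channel is convolved only with its own filter -- equivalent to a
    grouped convolution with group == C.
    """
    C, H, W = x.shape
    _, kh, kw = k.shape
    out = np.zeros((C, H - kh + 1, W - kw + 1))
    for c in range(C):                      # no sum across channels
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(x[c, i:i + kh, j:j + kw] * k[c])
    return out

# Tiny check: 2 channels, 3x3 input, 2x2 all-ones filters.
x = np.arange(18, dtype=float).reshape(2, 3, 3)
k = np.ones((2, 2, 2))
out = depthwise_conv2d(x, k)
```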

@ZhaoJ9014 Hello, as the title says: could you add a Baidu Cloud link for the DeepGlint data?

@eric612 Hi, you only provide lmdb data, but I would like to know how to prepare my own training data.

Hello @liuziwei7, I want to train a model on my own dataset and use your pre-trained model for prediction, so I need the train_val.prototxt, solver.prototxt, and deploy.prototxt files. Can you commit these...

@huangyangyu Hi, is a pretrained model available?

@farmingyard Hello, I think "caffe_gpu_set" should be "caffe_set" on lines 125 and 139 of conv_dw_layer.cpp; at the same time, "mutable_gpu_data" should be "mutable_cpu_data": https://github.com/farmingyard/caffe-mobilenet/blob/f7d7d130761727560f201c1ff5c274e938888b5f/conv_dw_layer.cpp#L125

https://github.com/happynear/caffe-windows/blob/2e9ade3075d86321342966ed0aa2961031a06daf/src/caffe/layers/inner_product_layer.cpp#L114-L117 @happynear As the title says, the weight-normalization code seems to handle only the non-transpose case (where the weight blob has shape NxK); the transpose case (weight shape KxN) apparently cannot be handled by the same code. Shouldn't weight normalization normalize each of the N weight vectors of length K (one per output class)?
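To make the layout issue concrete, here is a NumPy sketch (my own illustration, not the repo's code): in the non-transpose layout the per-class weight vectors are the rows of an NxK matrix, while in the transpose layout they are the columns of a KxN matrix, so the normalization axis must change with the layout.

```python
import numpy as np

N, K = 4, 6                                    # N output classes, K inputs
W = np.random.default_rng(1).normal(size=(N, K))   # non-transpose: N x K

# Non-transpose layout: each ROW is one length-K weight vector.
W_norm = W / np.linalg.norm(W, axis=1, keepdims=True)

# Transpose layout: K x N, so each COLUMN is one weight vector,
# and the normalization must run along axis 0 instead.
Wt = W.T                                       # K x N
Wt_norm = Wt / np.linalg.norm(Wt, axis=0, keepdims=True)

# Every per-class vector now has unit norm, and both layouts agree.
assert np.allclose(np.linalg.norm(W_norm, axis=1), 1.0)
assert np.allclose(W_norm.T, Wt_norm)
```

Applying the row-wise (axis=1) code to the transposed blob would instead normalize the K input dimensions, which is presumably the bug being reported.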

https://github.com/happynear/caffe-windows/blob/2e9ade3075d86321342966ed0aa2961031a06daf/src/caffe/layers/inner_product_layer.cpp#L141-L165 @happynear Hello, since the weights are normalized in Forward (Y = WX/||W||), why is the weight gradient in Backward computed exactly as in the unnormalized case (Y = WX), i.e. bottom_data * top_diff (for transpose) or top_diff * bottom_data (for non-transpose)? Is this an engineering simplification, or is there some other consideration?
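The two gradients really do differ. For a single output neuron y = (w . x)/||w||, the exact gradient is dy/dw = x/||w|| - (w . x) w/||w||^3, whereas the plain Y = WX rule gives just x (times top_diff). A finite-difference check confirms this (my own sketch, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)

def forward(w):
    # Normalized inner product: y = (w . x) / ||w||
    return w @ x / np.linalg.norm(w)

# Exact analytic gradient of the normalized forward pass.
norm = np.linalg.norm(w)
exact = x / norm - (w @ x) * w / norm**3

# Central finite differences as ground truth.
eps = 1e-6
fd = np.array([(forward(w + eps * e) - forward(w - eps * e)) / (2 * eps)
               for e in np.eye(5)])

assert np.allclose(exact, fd, atol=1e-5)       # analytic gradient is correct
assert not np.allclose(x, fd, atol=1e-5)       # the plain-WX gradient is not
```

So the unnormalized backward rule drops the -(w . x) w/||w||^3 term; whether that is a deliberate approximation is exactly the question being asked.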

@huangyangyu Hi, your "example/train_val.prototxt" contains a "SmoothMarginInnerProduct" layer, but your Caffe branch does not include it.

Hi @reshow, in augmentation.py there is a **rotateData** function whose inputs are x, y, ..... Obviously, x is the network input image and y is the uv_position_map (or target, label,...
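For this kind of augmentation the key constraint is that the image and its position map must be rotated consistently: the image is rotated spatially, while the position map (whose pixel grid lives in UV space but whose values are image-space coordinates) needs its x/y value channels transformed by the same rotation. A hypothetical sketch, assuming that layout; the function name, signature, and sign convention are my own, not the repo's rotateData:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_pair(x, y, angle_deg):
    """Rotate image x and position map y by the same angle about the image centre.

    x: image array (H, W[, C]); y: map whose last-axis [:2] entries are
    image-space (x, y) coordinates. Depending on axis orientation, the sign
    of the angle for the coordinate rotation may need flipping.
    """
    # Spatially rotate the image content (bilinear, same output shape).
    x_rot = rotate(x, angle_deg, reshape=False, order=1)

    # Rotate the coordinate VALUES stored in y about the image centre.
    h, w = x.shape[:2]
    c = np.array([w / 2.0, h / 2.0])
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    y_rot = y.copy()
    y_rot[..., :2] = (y[..., :2] - c) @ R.T + c   # row vectors: v' = R v
    return x_rot, y_rot

# Usage: a coordinate at (3, 2) in a 4x4 image, rotated 90 degrees about (2, 2).
x = np.zeros((4, 4))
y = np.zeros((2, 2, 3))
y[0, 0, :2] = [3.0, 2.0]
x_rot, y_rot = rotate_pair(x, y, 90.0)
```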