Results: 20 comments by Wei Wu

@BryantLJ Hello, I haven't tested the CPU/GPU speed of the lightened CNN model; I'll test it when I have time. This model is almost the same as the Caffe one. Also, there...

memonger should also support fine-tuning if configured correctly.

Sorry, I'm not sure what's wrong with your environment~

Hello @VincentGu11, I have no experience with multi-machine training. With N machines, the batch size should be set N times larger. BTW, you can try `--kv-store dist_sync_device` or `--kv-store dist_async_device`. @mli do you...
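A minimal sketch of the "N times larger" advice for synchronous data parallelism. The helper name is hypothetical, and the linear learning-rate scaling is a common heuristic assumed here, not something the comment states:

```python
# Hypothetical helper illustrating the "N machines -> N x batch size" advice.
# The linear learning-rate scaling is a common heuristic, assumed for illustration.

def scale_for_cluster(per_machine_batch, base_lr, num_machines):
    """Return (effective_batch, scaled_lr) for synchronous data parallelism."""
    # Gradients are averaged over all machines, so the effective batch grows N times.
    effective_batch = per_machine_batch * num_machines
    # Linear-scaling heuristic: grow the learning rate with the effective batch.
    scaled_lr = base_lr * num_machines
    return effective_batch, scaled_lr

# Example: batch 128 per machine on 4 machines.
effective_batch, scaled_lr = scale_for_cluster(128, 0.1, 4)
print(effective_batch, scaled_lr)
```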

@zhoubinxyz Hi, different GPUs have no influence on the training curve, but batch size matters a lot, especially for ImageNet, which is much bigger than CIFAR-10. I suggest using a batch size in [256, 512]....

No need for quality=100; you'd better use 90 or 95 (which Caffe uses, and the default value in MXNet was changed from 80 to 95). Also, I'm not sure whether the newest...

The difference between bigger and smaller batch sizes will show up around epoch 95 and later, so please train for more epochs. Actually, there is some theory about batch size; it relates to gradient variance, ref ...
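As an illustrative sketch of the gradient-variance point (a toy simulation, not from the referenced theory): the variance of a mini-batch gradient estimate shrinks roughly as 1/batch_size, so larger batches give smoother but noisier-free updates.

```python
import random
import statistics

# Toy simulation: per-sample "gradients" are drawn from N(0, 1); the
# mini-batch gradient is their average. Its variance should scale ~ 1/B.

random.seed(0)

def minibatch_gradient(batch_size):
    """Average of batch_size per-sample gradients drawn from N(0, 1)."""
    return sum(random.gauss(0.0, 1.0) for _ in range(batch_size)) / batch_size

def empirical_variance(batch_size, trials=2000):
    """Empirical variance of the mini-batch gradient over many batches."""
    grads = [minibatch_gradient(batch_size) for _ in range(trials)]
    return statistics.pvariance(grads)

v16 = empirical_variance(16)    # roughly 1/16
v256 = empirical_variance(256)  # roughly 1/256
print(v16, v256)
```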

At test time, it is based on the statistics of the training data, not the test batch, because the test batch may contain only one sample.
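A minimal sketch of why test-time normalization relies on running statistics gathered during training (class and method names here are illustrative, not MXNet's API): a test batch with a single sample has no usable variance of its own.

```python
# Illustrative BatchNorm-style running statistics; not MXNet's API.

class RunningNorm:
    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        self.running_mean = 0.0
        self.running_var = 1.0

    def train_step(self, batch):
        """Normalize with batch statistics and update the running averages."""
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        # Exponential moving average of the training-batch statistics.
        self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mean
        self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]

    def test_step(self, batch):
        """Normalize with training-time statistics: a 1-sample batch is fine."""
        return [(x - self.running_mean) / (self.running_var + self.eps) ** 0.5
                for x in batch]

norm = RunningNorm()
for _ in range(200):
    norm.train_step([1.0, 2.0, 3.0, 4.0])  # batch mean 2.5, variance 1.25
out = norm.test_step([2.5])                # single-sample test batch works
print(norm.running_mean, out[0])
```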

Hello @bruinxiong, please refer to https://github.com/dmlc/mxnet-notebooks/tree/master/python/how_to

Please look into the code of AnchorLoader.