Wei Wen

41 comments by Wei Wen

@weitaoatvison 1. Did you train it from scratch or fine-tune it? It's better to fine-tune. 2. Did you try a smaller `force_decay`? `force_decay` should vary with your network...

1. Fine-tuning is required to recover accuracy after decomposing. Please do layer-wise timing to verify the bottleneck. The architecture of ResNet is very different from AlexNet's. 2. Not sure how...
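To illustrate the layer-wise timing suggested above, here is a minimal, framework-agnostic sketch (not code from this repo): it times a list of layer callables one by one and reports milliseconds per iteration, using toy matmul "layers" as stand-ins for real conv layers.

```python
import time
import numpy as np

def time_layers(layers, x, iters=10):
    """Roughly time each layer in sequence; returns {name: ms per iteration}."""
    timings = {}
    for name, layer in layers:
        t0 = time.perf_counter()
        for _ in range(iters):
            y = layer(x)
        timings[name] = (time.perf_counter() - t0) * 1000.0 / iters
        x = y  # output of this layer feeds the next one
    return timings

# Toy "network": two matmuls standing in for conv layers.
w1 = np.random.rand(256, 256)
w2 = np.random.rand(256, 64)
layers = [("fc1", lambda a: a @ w1), ("fc2", lambda a: a @ w2)]

ms = time_layers(layers, np.random.rand(32, 256))
bottleneck = max(ms, key=ms.get)  # the layer worth decomposing first
```

The per-layer numbers show where decomposition would actually pay off; in a real network the distribution can differ sharply between architectures.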

ResNet is trained on CIFAR-10 while GoogLeNet is trained on ImageNet. I recommend first testing how well low-rank approximation accelerates them without force regularization. If that is promising, then you may use...
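As a quick way to run the test suggested above, the low-rank approximation itself can be sketched with a truncated SVD on a (reshaped) weight matrix. This is a generic NumPy illustration, not the repo's `nn_decomposer.py`; the `rank_ratio` name is an assumption for the illustration.

```python
import numpy as np

def low_rank_factors(W, rank_ratio=0.5):
    """Factor W (out_ch x in_dim) into A @ B via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    rank = max(1, int(rank_ratio * len(s)))
    A = U[:, :rank] * s[:rank]   # out_ch x rank (singular values folded in)
    B = Vt[:rank, :]             # rank x in_dim
    return A, B

# A conv weight flattened to 2-D: 64 filters, 128 = in_ch * kh * kw.
W = np.random.rand(64, 128)
A, B = low_rank_factors(W, rank_ratio=0.25)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)  # relative error
```

The two factors replace one layer with two thinner ones; if the speedup at an acceptable `err` is promising, force regularization can then push the spectrum toward lower rank before decomposing.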

@weitaoatvison This is one of the open issues in this work that remains to be solved, as I mentioned [here](https://github.com/wenwei202/caffe#some-open-research-topics). The current strategy is to use a smaller rank ratio. Let me know...

Random sparse neural networks with CSR sparse computation are slow. Please use structured sparsity and `conv_mode: LOWERED_CCNMM` in our deploy protobuf. More details are in the [tutorials](https://github.com/wenwei202/caffe/blob/scnn/README.md).
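The reason structured sparsity helps where random sparsity does not can be sketched in a few lines: when whole rows of the lowered weight matrix are zero, they (and the matching input columns) can be physically removed, so the convolution becomes a smaller *dense* GEMM instead of an irregular sparse one. This is a NumPy illustration of the idea, not the repo's `LOWERED_CCNMM` implementation.

```python
import numpy as np

def shrink_gemm(X, W):
    """Drop all-zero rows of W and matching columns of X, then dense GEMM."""
    keep = ~np.all(W == 0, axis=1)   # rows of W that carry any weight
    return X[:, keep] @ W[keep, :]

X = np.random.rand(8, 100)           # lowered (im2col-style) feature matrix
W = np.random.rand(100, 32)          # lowered weight matrix
W[20:80, :] = 0                      # structured sparsity: whole rows zeroed

dense = X @ W                        # baseline full GEMM
shrunk = shrink_gemm(X, W)           # same result from a 60%-smaller GEMM
```

With random (element-wise) sparsity no row is entirely zero, so nothing can be removed and the GEMM stays full-sized; that is why the CSR path is slow in practice.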

The latest version of cuDNN I tested on was 5.0; it seems the API has changed since then.

@hahne There is no constraint on the kernel size from the algorithm's perspective. It works in general (see [here](https://github.com/wenwei202/caffe/blob/master/python/caffe_apps.py#L63-L64)). The code expects [those](https://github.com/wenwei202/caffe/blob/master/python/nn_decomposer.py#L166-L176) only because I did a lazy job...

@hahne I suspect those `assert`s are not necessary for the linear combination layer, but we need to delete `kernel_h` and `kernel_w` if they are [copied](https://github.com/wenwei202/caffe/blob/master/python/nn_decomposer.py#L163) from the decomposed conv layer. A similar thing...

That's a problem I wasn't able to debug and fix. Let me know if you get a chance to fix it. Nevertheless, the code works well for both sparsity and accuracy.

@bachml We have a simple [tutorial](https://github.com/wenwei202/caffe/blob/sfm/python/README.md) on the usage of this code. We will add more details soon.