
How to convert a TensorFlow model to a Caffe model?

Open cyh24 opened this issue 8 years ago • 34 comments

How can I convert a TensorFlow .ckpt to a Caffe model? Is that possible without a .prototxt?

Any help is appreciated.

cyh24 avatar Sep 09 '16 09:09 cyh24

I have the same problem. I want to convert a TensorFlow model to a Caffe model. Have you found any way to do it?

Dagalaki avatar Dec 14 '16 12:12 Dagalaki

Same problem. @cyh24 @Dagalaki, if you find a solution, please let me know.

soldier828 avatar Dec 27 '16 09:12 soldier828

@ethereon could you provide some advice?

catsdogone avatar Feb 14 '17 02:02 catsdogone

The reverse conversion is fairly similar:

  1. Map TensorFlow ops (or groups of ops) to Caffe layers
  2. Transform parameters to match Caffe's expected format

Things are slightly trickier for step 1 when going from tf to caffe, since the equivalent of a caffe layer might be split into multiple tf sub ops. So pattern matching against the op signatures / scopes might be one approach for tackling this.

For certain ops like convolutions, you can avoid the transformation in step 2 by specifying a Caffe compatible ordering (eg: data_format = NCHW)

ethereon avatar Feb 15 '17 23:02 ethereon
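The pattern-matching idea in step 1 can be sketched with plain regular expressions over op names. This is only an illustration: the op names below are made up, and a real converter would also need to match on op types and input edges, not just scopes.

```python
import re
from collections import OrderedDict

# Hypothetical op names as they might appear in a TF graph.
op_names = [
    "conv1/weights/read",
    "conv1/Conv2D",
    "conv1/BatchNorm/moving_mean/read",
    "conv1/BatchNorm/FusedBatchNorm",
    "conv1/Relu",
    "fc/MatMul",
    "fc/BiasAdd",
]

def group_ops_by_scope(names):
    """Group op names by their top-level scope, which often corresponds
    to a single Caffe layer (or a small Conv/BN/ReLU stack)."""
    groups = OrderedDict()
    for name in names:
        scope = re.match(r"([^/]+)", name).group(1)
        groups.setdefault(scope, []).append(name)
    return groups

groups = group_ops_by_scope(op_names)
# The 'conv1' scope bundles the Conv2D, BatchNorm, and Relu sub-ops that
# together map to Caffe's Convolution + BatchNorm + Scale + ReLU layers.
```

Each group can then be dispatched to a layer-specific emitter that writes the prototxt entry and copies the parameters.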

@ethereon @cyh24 Thank you for your help. I am trying to convert the inception-resnet-v2 model to Caffe, and I am not sure about the parameters of the BatchNorm layer. Is this mapping right?

  tfLayer/weights:0 -> caffeLayer_weights, ...[0].data
  tfLayer/BatchNorm/beta:0 -> caffeLayerScale_bias, ...[1].data
  tfLayer/BatchNorm/moving_mean:0 -> caffeLayerBn_mean, ...[0].data
  tfLayer/BatchNorm/moving_variance:0 -> caffeLayerBn_var, ...[1].data

I copy the parameters this way, but the produced Caffe model shows bad classification results.

catsdogone avatar Feb 21 '17 07:02 catsdogone

@catsdogone I tried the same thing; my activations are off and I cannot get the same accuracy. I also set the scale parameter in the Scale layer to 1, and set BatchNorm's moving average factor to 1. :(

Jerryzcn avatar Mar 23 '17 04:03 Jerryzcn

Same issue. I want to convert the TensorFlow Inception V3 and ResNet models to Caffe. That would be great!

sskgit avatar Apr 05 '17 16:04 sskgit

Okay, I was able to achieve similar performance after changing the padding for the 1x7 and 7x1 filters to (0,3) and (3,0) instead of (1,2) and (2,1).

Jerryzcn avatar Apr 07 '17 20:04 Jerryzcn
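For context on why (0,3)/(3,0) works: TensorFlow's "SAME" padding computes a total pad along each dimension and splits it floor-before / ceil-after, so at stride 1 a 7-tap axis gets 3 on each side and a 1-tap axis gets 0. A small sketch of that formula (the function name is mine, not a TF API):

```python
import math

def tf_same_padding(input_size, kernel_size, stride):
    """Padding TensorFlow's SAME mode applies along one dimension,
    returned as (before, after)."""
    out_size = math.ceil(input_size / stride)
    total = max((out_size - 1) * stride + kernel_size - input_size, 0)
    before = total // 2
    return before, total - before

# A 1x7 kernel at stride 1: 3 pixels of padding on each side of the
# 7-tap axis, and none along the 1-tap axis -- i.e. Caffe pad (0, 3).
print(tf_same_padding(17, 7, 1))  # (3, 3)
print(tf_same_padding(17, 1, 1))  # (0, 0)
```

Note that for even total padding the split is symmetric, which is why a plain Caffe `pad` value can reproduce it here; when the total is odd, TF pads asymmetrically and Caffe cannot match it exactly.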

@Jerryzcn @catsdogone Could you share how you transfer a model from TensorFlow to Caffe? Did you rewrite Caffe's prototxt from scratch based on the TensorFlow model, or write a transfer.py script to do it?

nyyznyyz1991 avatar May 19 '17 02:05 nyyznyyz1991

@nyyznyyz1991 I use pycaffe to generate the prototxt based on tensorflow. I cannot share it though.

Jerryzcn avatar Jun 10 '17 21:06 Jerryzcn

@Jerryzcn is it hard to code the conversion script with pycaffe?

neobarney avatar Jun 19 '17 00:06 neobarney

@neobarney took me about 1 week.

Jerryzcn avatar Jun 19 '17 00:06 Jerryzcn

@Jerryzcn wow, that's long. Are you planning to release it on GitHub? It would be helpful to lots of people! :)

neobarney avatar Jun 19 '17 02:06 neobarney

@neobarney it should only take you 2-3 days; I spent half a week figuring out why my activations did not match the original network.

Jerryzcn avatar Jun 20 '17 23:06 Jerryzcn

Cool, I'll try then. Thanks for sharing, Jerry!

neobarney avatar Jun 21 '17 03:06 neobarney

@Jerryzcn If I use tf.contrib.layers.batch_norm(input, scale=False) in TensorFlow, scale=False controls whether gamma is used in y = gamma*x + beta. The definition of contrib.layers.batch_norm in TensorFlow is:

  def batch_norm(inputs, decay=0.999, center=True, scale=False, epsilon=0.001,
                 activation_fn=None, param_initializers=None, param_regularizers=None,
                 updates_collections=ops.GraphKeys.UPDATE_OPS, is_training=True,
                 reuse=None, variables_collections=None, outputs_collections=None,
                 trainable=True, batch_weights=None, fused=False,
                 data_format=DATA_FORMAT_NHWC, zero_debias_moving_mean=False,
                 scope=None, renorm=False, renorm_clipping=None, renorm_decay=0.99)

  scale: If True, multiply by gamma. If False, gamma is not used. When the next
  layer is linear (also e.g. nn.relu), this can be disabled since the scaling
  can be done by the next layer.

How should I set the parameters of the BatchNorm layer in Caffe so that the results match between TensorFlow and Caffe?

zmlmanly avatar Jul 03 '17 12:07 zmlmanly

@zmlmanly setting scale to 1 in Caffe should work.

Jerryzcn avatar Jul 04 '17 00:07 Jerryzcn
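For reference, the Caffe side of scale=False is commonly written as a BatchNorm layer followed by a Scale layer whose multiplier blob is simply filled with 1, so only the bias (TF's beta) carries information. A sketch of the prototxt; the layer and blob names here are made up:

```
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: true }
}
layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }  # fill gamma with 1, copy beta from TF
}
```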

@Jerryzcn Thank you very much. I'm converting TensorFlow to Caffe. I use tf.contrib.layers.batch_norm(input, scale=False) in TensorFlow, so there is only the beta param in the checkpoint. In your view:

  caffeLayer: scale_layer_gamma = 1
  caffeLayer: scale_layer_beta = tfLayer/BatchNorm/beta:0

But I cannot find the mean and variance in the checkpoint, so how should I set the mean and variance in Caffe?

zmlmanly avatar Jul 04 '17 01:07 zmlmanly

@catsdogone Hi, I want to know how to save the moving_mean and moving_variance params of a TensorFlow BatchNorm layer. I have checked the params in my trained TensorFlow model, but there is no mean and variance in the BatchNorm layer. Thank you for your help.

zmlmanly avatar Jul 04 '17 03:07 zmlmanly

@zmlmanly I think I set them either to zero or one. I forgot which exactly.

Jerryzcn avatar Jul 04 '17 21:07 Jerryzcn

@zmlmanly @neobarney Were you able to get it running?

MayankSingal avatar Oct 08 '17 08:10 MayankSingal

To convert a batch normalization layer from TensorFlow to Caffe: one BatchNorm layer in TF is equivalent to a succession of two Caffe layers, BatchNorm + Scale:

  net.params[bn_name][0].data[:] = tf_movingmean
  # epsilon 0.001 is the default value used by tf.contrib.layers.batch_norm!
  net.params[bn_name][1].data[:] = tf_movingvariance + 0.001
  net.params[bn_name][2].data[:] = 1  # important, set it to be 1
  net.params[scale_name][0].data[:] = tf_gamma
  net.params[scale_name][1].data[:] = tf_beta

jzhaosc avatar Oct 14 '17 00:10 jzhaosc
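That mapping can be checked numerically. TF computes gamma*(x-mean)/sqrt(var+0.001)+beta, while Caffe's BatchNorm normalizes with the stored mean/variance (divided by the scale-factor blob, here set to 1) plus its own small epsilon, and the Scale layer applies gamma and beta. A NumPy sketch under those assumptions, folding TF's 0.001 into the stored variance as above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # (batch, channels)
mean = rng.normal(size=8)
var = rng.uniform(0.5, 2.0, size=8)
gamma = rng.normal(size=8)
beta = rng.normal(size=8)

# TensorFlow-style batch norm (tf.contrib.layers.batch_norm, epsilon=0.001).
tf_out = gamma * (x - mean) / np.sqrt(var + 0.001) + beta

# Caffe-style BatchNorm + Scale with parameters filled as in the comment
# above: TF's epsilon folded into the stored variance, scale factor = 1.
caffe_mean = mean                  # net.params[bn_name][0]
caffe_var = var + 0.001            # net.params[bn_name][1]
caffe_eps = 1e-5                   # Caffe's batch_norm_param eps default
bn_out = (x - caffe_mean) / np.sqrt(caffe_var + caffe_eps)
caffe_out = gamma * bn_out + beta  # Scale layer

# Agreement is only up to the residual 1e-5 Caffe epsilon -- hence
# "be careful of the epsilon" below.
print(np.max(np.abs(tf_out - caffe_out)))
```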

@jzhaosc Thx, it helps a lot.

Be careful of the epsilon guys.

lhCheung1991 avatar Oct 31 '17 03:10 lhCheung1991

@catsdogone May I ask whether you have successfully converted inception_resnet_v2 from TensorFlow to Caffe? Thank you.

jiezhicheng avatar Dec 22 '17 07:12 jiezhicheng

When you are using tf.nn.batch_normalization:

  tf moving_variance + 0.001 ==> caffe BatchNorm bias
  tf moving_mean ==> caffe BatchNorm weights
  tf gamma ==> caffe Scale weights
  tf beta ==> caffe Scale bias

AddASecond avatar Feb 01 '18 13:02 AddASecond

Maybe you can look at another open-source library from Microsoft called MMdnn: https://github.com/Microsoft/MMdnn

zhongchengyong avatar Mar 07 '18 02:03 zhongchengyong

@giticaniup Yes, I've already done it with MMdnn, which is much easier and more intuitive. The only remaining question is how to make the image preprocessing in TensorFlow match that in Caffe.

AddASecond avatar Mar 07 '18 03:03 AddASecond
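On the preprocessing question: TF's Inception-family models typically scale 0-255 pixels into [-1, 1] (x/127.5 - 1), while Caffe models usually subtract a per-channel mean from 0-255 BGR pixels. Which convention a given checkpoint expects is model-specific, so treat this as a hedged sketch of the TF Inception side only (the function name is mine):

```python
import numpy as np

def inception_preprocess(img):
    """Map 0-255 RGB pixels to the [-1, 1] range used by TF's
    Inception-style models (equivalent to 2/255 * x - 1)."""
    return np.asarray(img, dtype=np.float64) / 127.5 - 1.0

img = np.array([[0.0, 127.5, 255.0]])
print(inception_preprocess(img))  # [[-1.  0.  1.]]
```

When both frameworks see numerically identical inputs (after any RGB/BGR swap and layout transpose), the converted network's activations can be compared layer by layer.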

I'm now converting the TF mobilenet-v2 model to a Caffe model. I use the prototxt here (https://github.com/shicai/MobileNet-Caffe) and have converted all the params correctly, but I cannot get the same accuracy.

Do I miss any detail?

cjerry1243 avatar Apr 20 '18 05:04 cjerry1243

@cjerry1243 I have tried that before, so I strongly recommend not wasting time training MobileNet in Caffe with this repository. It uses Caffe's built-in group convolution, whose depthwise convolution implementation is a non-parallel for loop and lacks good CUDA/cuDNN support. The training process was very slow, so tuning hyperparameters would take a lot of time. Maybe you should try https://github.com/yonghenglh6/DepthwiseConvolution or another implementation instead.

AddASecond avatar Apr 20 '18 05:04 AddASecond

@bobauditore Thanks for your advice. I still want to convert the mobilenet-v2 ckpt to a Caffe model.

Apart from the different depthwise convolution in that repository (https://github.com/shicai/MobileNet-Caffe), I found another problem during conversion.

The first conv layer output values (Caffe: net.params['conv1'][0].data; TF: sess.graph.get_tensor_by_name('MobilenetV2/Conv/Conv2D:0')) are different when I feed in the same preprocessed image. The only difference in the image input is channels-last for TF vs. channels-first for Caffe.

Besides, I use np.swapaxes to rearrange the TF variables before feeding them into the Caffe variables:

  tf_var shape: (height, width, depth, channel)
  caffe_var shape: (channel, depth, height, width)

Where is the mistake in my conversion?

cjerry1243 avatar Apr 20 '18 07:04 cjerry1243
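One likely culprit in the conversion above: np.swapaxes exchanges only two axes, but going from TF's (height, width, in_channels, out_channels) kernel layout to Caffe's (out_channels, in_channels, height, width) needs a full four-axis permutation. A sketch with made-up kernel dimensions:

```python
import numpy as np

# A TF conv kernel: (height, width, in_channels, out_channels).
tf_w = np.arange(7 * 7 * 3 * 32, dtype=np.float64).reshape(7, 7, 3, 32)

# Full permutation to Caffe's (out_channels, in_channels, height, width):
# caffe_w[o, i, h, w] == tf_w[h, w, i, o].
caffe_w = np.transpose(tf_w, (3, 2, 0, 1))
print(caffe_w.shape)  # (32, 3, 7, 7)

# A single swapaxes only exchanges two axes and does NOT reproduce this:
wrong = np.swapaxes(tf_w, 0, 3)
print(wrong.shape)    # (32, 7, 3, 7)
```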