
How to extract the original model's weights from a trained QAT model?

Open doomooo opened this issue 2 years ago • 9 comments

System information

TensorFlow version (you are using): 2.8
Are you willing to contribute it (Yes/No):

Motivation

How can I extract the original model's weights from a trained QAT model? I want to extract both the weights and the QAT quantization parameters so that I can set the int8 parameters in TensorRT manually.

doomooo avatar Jun 06 '22 09:06 doomooo
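Since the end goal is setting int8 parameters in TensorRT by hand, here is a minimal sketch (plain Python, no TensorFlow required) of how a learned (min, max) range from a QAT quantizer could be converted into a per-tensor scale, assuming TensorRT's symmetric int8 convention over [-127, 127]; the function name is illustrative, not part of any API:

```python
def int8_scale(t_min: float, t_max: float) -> float:
    """Symmetric int8 scale from a learned (min, max) range.

    TensorRT uses symmetric quantization for int8, so only the
    amplitude max(|min|, |max|) matters; 127 is the largest
    representable positive int8 value.
    """
    amax = max(abs(t_min), abs(t_max))
    return amax / 127.0

# Example: a QAT quantizer that learned the range [-0.5, 1.27]
scale = int8_scale(-0.5, 1.27)
print(scale)
```

The resulting scale is what a tool like TensorRT would use to map float values to int8 (`q = round(x / scale)`).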

Hi, I wonder how to extract the original model's weights from the trained QAT model. @thaink

doomooo avatar Jun 06 '22 09:06 doomooo

Can anyone help take a look? Thanks! @thaink @rino20

doomooo avatar Jun 07 '22 03:06 doomooo

Aren't the original model's weights in the checkpoint files or the variables/ directory?

thaink avatar Jun 07 '22 06:06 thaink

Thanks for your reply! @thaink No, the weights are inside the quantize wrapper layers, and the variable names are changed.

doomooo avatar Jun 07 '22 07:06 doomooo

I can extract the weights with some code or tricks, but I wonder whether an official method exists. @thaink

doomooo avatar Jun 07 '22 07:06 doomooo

I don't think we have an official way to do that. Use with TensorRT isn't in our set of use cases for now.

thaink avatar Jun 07 '22 07:06 thaink

@Xhark could you review this thread? Thanks

inho9606 avatar Jun 14 '22 01:06 inho9606

    import tensorflow as tf

    # model:   the original (float) Keras model
    # model_q: the QAT model produced by tfmot.quantization.keras.quantize_model;
    #          each wrapped layer is named 'quant_' + the original layer's name
    for ly in model.layers:
        if not ly.variables:
            continue
        lly = model_q.get_layer('quant_' + ly.name)
        for var in ly.variables:
            for vvar in lly.variables:
                if var.name == vvar.name:
                    # Copy the trained value back into the original model.
                    # Quantizer-only variables (e.g. min/max) have no match
                    # in the original layer and are skipped.
                    var.assign(vvar.numpy())
                    break

WillLiGitHub avatar Jul 07 '22 08:07 WillLiGitHub
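The name-matching logic above can be illustrated without TensorFlow, using plain Python stand-ins for variables; the `Var` class and the variable names below are purely hypothetical, and in real code `tf.Variable.assign` would be used instead of attribute assignment:

```python
# Minimal stand-in for a named variable (illustration only).
class Var:
    def __init__(self, name, value):
        self.name = name
        self.value = value

# Variables of an original layer (untrained placeholders) and of the
# corresponding quantize-wrapped layer (trained values plus an extra
# quantizer-only variable, which has no match and is skipped).
orig = [Var("conv1/kernel:0", 0.0), Var("conv1/bias:0", 0.0)]
quant = [
    Var("conv1/kernel:0", 1.5),
    Var("conv1/bias:0", -0.3),
    Var("conv1/kernel_min:0", -2.0),  # quantizer state, not copied
]

for var in orig:
    for vvar in quant:
        if var.name == vvar.name:
            var.value = vvar.value  # copy the trained value back
            break

print([v.value for v in orig])  # -> [1.5, -0.3]
```

Matching by exact variable name is what makes the quantizer's own min/max state fall through harmlessly: only variables that existed in the float model get overwritten.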

@WillLiGitHub Thanks! I have extracted the weights this way.

doomooo avatar Jul 07 '22 09:07 doomooo