jacinto-ai-devkit

About the Clip layer

Open · wuzhiyang2016 opened this issue on Jul 31 '20 · 7 comments

Hello, when I use quantization-aware training to train a quantized model, the training code inserts Clip layers after some layers, and this causes a couple of problems:

1. For an original float model structure of conv + bn + relu + avgpooling, the quantized model becomes conv + bn + clip + avgpooling + clip. But when I use the model import tool to convert the model, the tool's code (see the function tidl_mergeClipLayer()) does not merge the Clip layer that follows the avgpooling layer, and it raises the error "...the model will not work".

2. The same problem happens for dataLayer + clip.

So, should users change the code in tidl_mergeClipLayer(), or is there something wrong with the model structure?
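For reference, a minimal PyTorch sketch of the float structure in question (the module and channel sizes here are hypothetical, not from the repo):

```python
import torch

class ConvBnReluPool(torch.nn.Module):
    """Hypothetical float block: conv + bn + relu + avgpooling."""

    def __init__(self, in_ch=16, out_ch=32):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = torch.nn.BatchNorm2d(out_ch)
        self.relu = torch.nn.ReLU()
        self.pool = torch.nn.AvgPool2d(kernel_size=2)

    def forward(self, x):
        # After quantization-aware training this graph becomes
        # conv + bn + clip + avgpool + clip; the import tool's
        # tidl_mergeClipLayer() fails to merge the trailing clip.
        return self.pool(self.relu(self.bn(self.conv(x))))

x = torch.randn(1, 16, 64, 64)
y = ConvBnReluPool()(x)  # shape: (1, 32, 32, 32)
```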

wuzhiyang2016 · Jul 31 '20 07:07

Hi,

In pytorch-jacinto-ai-devkit, in the file modules/pytorch_jacinto_ai/xnn/quantize/quant_graph_module.py, you can see this line:

self.quantize_out_blocks = (torch.nn.ReLU, torch.nn.ReLU6, torch.nn.Hardtanh, layers.QAct, layers.PAct2, layers.AddBlock, layers.CatBlock, layers.MultBlock, torch.nn.MaxPool2d, torch.nn.AvgPool2d)

Please try after removing torch.nn.AvgPool2d from that list.
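For reference, the edited line would look roughly like this (a sketch of a fragment inside quant_graph_module.py; your checkout may differ):

```python
# Module types whose outputs get a quantize/Clip node inserted after them.
# torch.nn.AvgPool2d removed so that no Clip is inserted after average
# pooling (tidl_mergeClipLayer() in the TIDL import tool cannot merge it there).
self.quantize_out_blocks = (torch.nn.ReLU, torch.nn.ReLU6, torch.nn.Hardtanh,
                            layers.QAct, layers.PAct2, layers.AddBlock,
                            layers.CatBlock, layers.MultBlock,
                            torch.nn.MaxPool2d)
```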

Let us know if it works.

mathmanu · Jul 31 '20 07:07

OK, I will try that. How should I deal with the dataLayer's Clip layer? For a structure dataLayer + conv, it will convert to dataLayer + clip + conv, and this Clip will not be merged either.

wuzhiyang2016 · Jul 31 '20 08:07

Are you facing any issue with that clip being there?

mathmanu · Jul 31 '20 08:07

I noticed that you said there was an issue with that Clip being there. (We are not facing that issue, but in our case it was dataLayer + BN + Clip, because TIDL inserted a BN layer due to inDataNorm.)

To avoid that clip, do the following:

In pytorch-jacinto-ai-devkit, in the file modules/pytorch_jacinto_ai/xnn/quantize/quant_graph_module.py, in the function _analyse_connections_op, you can see the line:

quantize_in = utils.is_conv_deconv_linear(module) and not is_input_quantized and not is_input_ignored and is_first_module

Please change it to:

quantize_in = False
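For reference, the change in context would look roughly like this (a sketch; the surrounding function body is elided and may differ in your version of the file):

```python
# Inside _analyse_connections_op() in
# modules/pytorch_jacinto_ai/xnn/quantize/quant_graph_module.py

# Original: quantize (insert a Clip) at the input of the first
# conv/deconv/linear module in the network:
#   quantize_in = (utils.is_conv_deconv_linear(module) and
#                  not is_input_quantized and
#                  not is_input_ignored and is_first_module)

# Changed: never quantize at the input, so no Clip layer is
# inserted right after the dataLayer:
quantize_in = False
```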

Let us know how it goes.

mathmanu · Jul 31 '20 08:07

Thank you very much, I'll try that.

wuzhiyang2016 · Jul 31 '20 08:07

Hi @wuzhiyang2016, which version of TIDL did you get these errors in?

mathmanu · Aug 10 '20 14:08

The folder name is tidl_j7_01_00_00_00 @mathmanu

wuzhiyang2016 · Aug 13 '20 06:08