Torch-Pruning
Are quantized networks supported?
Hi, I'm curious whether quantized versions of networks are supported. I tried one today and ran into this issue:
QuantizedResnet18 took 35.105 ms [min/max: 35.1/35.1] ms for one forward pass!
Size (MB): 22.23 (initial 87.9)
Number of Parameters: 0.0M
normal resnet took 3624.206 ms [min/max: 3624.2/3624.2] ms
start of pruning...
Traceback (most recent call last):
File "d:\Codes\face\python\FV\Pruning\prune.py", line 91, in <module>
model = prune_model(model)
File "d:\Codes\face\python\FV\Pruning\prune.py", line 76, in prune_model
prune_conv( m.conv1, block_prune_probs[blk_id] )
File "d:\Codes\face\python\FV\Pruning\prune.py", line 58, in prune_conv
weight = conv.weight.detach().cpu().numpy()
AttributeError: 'function' object has no attribute 'detach'
Seems like quantized operators are not supported. Is that true, or am I missing something? Thanks in advance.
Hi @Coderx7, quantized models are not supported in this package.
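For context on the AttributeError above: PyTorch's quantized modules expose their weights through a method rather than an nn.Parameter attribute, so `conv.weight` is a bound function on a quantized conv. Below is a minimal sketch of the difference (using the `torch.nn.quantized` namespace; exact module paths may vary across PyTorch versions). The usual workaround is to prune the float model first and quantize afterwards.

```python
import torch.nn as nn
import torch.nn.quantized as nnq

# Float conv: .weight is an nn.Parameter, so .detach() works as expected.
float_conv = nn.Conv2d(3, 8, kernel_size=3)
w = float_conv.weight.detach().cpu().numpy()

# Quantized conv: .weight is a *method* returning a quantized tensor,
# which is why conv.weight.detach() raises
# AttributeError: 'function' object has no attribute 'detach'.
quant_conv = nnq.Conv2d(3, 8, kernel_size=3)
wq = quant_conv.weight()                # call it to get the quantized weight
w = wq.dequantize().cpu().numpy()       # dequantize before treating it as a float array
```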