list index out of range problem in graph.get_output_edges()
Hello, I'm trying to compress the Faster R-CNN model from mmdetection. However, I hit an index-out-of-range problem at the beginning of training. It is located at:
for node in graph.get_nodes_by_types([v.op_func_name for v in NNCF_GENERAL_CONV_MODULES_DICT]):
out_edge = graph.get_output_edges(node)[0]
I debugged it and found that graph.get_output_edges returns an empty list. What should I do? Here is the full error message and the model architecture:
Traceback (most recent call last):
File "/data/dy/pycharm-community-2021.2.2/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/data/dy/pycharm-community-2021.2.2/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/data/dy/code/mmdetection-ote/tools/train.py", line 339, in <module>
main()
File "/data/dy/code/mmdetection-ote/tools/train.py", line 335, in main
meta=meta)
File "/data/dy/code/mmdetection-ote/mmdet/apis/train.py", line 103, in train_detector
compression_ctrl, model = wrap_nncf_model(model, cfg, data_loaders[0], get_fake_input)
File "/data/dy/code/mmdetection-ote/mmdet/integration/nncf/compression.py", line 223, in wrap_nncf_model
compression_state=compression_state)
File "/data/dy/anaconda3/envs/ote/lib/python3.7/site-packages/nncf/torch/model_creation.py", line 146, in create_compressed_model
compression_ctrl = builder.build_controller(compressed_model)
File "/data/dy/anaconda3/envs/ote/lib/python3.7/site-packages/nncf/torch/compression_method_api.py", line 163, in build_controller
ctrl = self._build_controller(model)
File "/data/dy/anaconda3/envs/ote/lib/python3.7/site-packages/nncf/torch/pruning/filter_pruning/algo.py", line 100, in _build_controller
self.config)
File "/data/dy/anaconda3/envs/ote/lib/python3.7/site-packages/nncf/torch/pruning/filter_pruning/algo.py", line 137, in __init__
self.flops_count_init()
File "/data/dy/anaconda3/envs/ote/lib/python3.7/site-packages/nncf/torch/pruning/filter_pruning/algo.py", line 259, in flops_count_init
out_edge = graph.get_output_edges(node)[0]
IndexError: list index out of range
FasterRCNN(
(backbone): ResNet(
(conv1): NNCFConv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): ResLayer(
(0): Bottleneck(
(conv1): NNCFConv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): NNCFConv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): NNCFConv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): NNCFConv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer2): ResLayer(
(0): Bottleneck(
(conv1): NNCFConv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): NNCFConv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): NNCFConv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): NNCFConv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): NNCFConv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer3): ResLayer(
(0): Bottleneck(
(conv1): NNCFConv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): NNCFConv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(6): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(7): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(8): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(9): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(10): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(11): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(12): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(13): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(14): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(15): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(16): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(17): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(18): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(19): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(20): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(21): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(22): Bottleneck(
(conv1): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer4): ResLayer(
(0): Bottleneck(
(conv1): NNCFConv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): NNCFConv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): NNCFConv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): NNCFConv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): NNCFConv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): NNCFConv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
)
(neck): FPN(
(lateral_convs): ModuleList(
(0): ConvModule(
(conv): NNCFConv2d(
256, 256, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
)
(1): ConvModule(
(conv): NNCFConv2d(
512, 256, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
)
(2): ConvModule(
(conv): NNCFConv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
)
(3): ConvModule(
(conv): NNCFConv2d(
2048, 256, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
)
)
(fpn_convs): ModuleList(
(0): ConvModule(
(conv): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
)
(1): ConvModule(
(conv): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
)
(2): ConvModule(
(conv): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
)
(3): ConvModule(
(conv): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
)
)
)
(rpn_head): RPNHead(
(loss_cls): CrossEntropyLoss()
(loss_bbox): L1Loss()
(rpn_conv): NNCFConv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
(1): UpdateWeight(
(op): FilterPruningBlock()
)
(2): UpdateWeight(
(op): FilterPruningBlock()
)
(3): UpdateWeight(
(op): FilterPruningBlock()
)
(4): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(rpn_cls): NNCFConv2d(
256, 3, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
(1): UpdateWeight(
(op): FilterPruningBlock()
)
(2): UpdateWeight(
(op): FilterPruningBlock()
)
(3): UpdateWeight(
(op): FilterPruningBlock()
)
(4): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
(rpn_reg): NNCFConv2d(
256, 12, kernel_size=(1, 1), stride=(1, 1)
(pre_ops): ModuleDict(
(0): UpdateWeight(
(op): FilterPruningBlock()
)
(1): UpdateWeight(
(op): FilterPruningBlock()
)
(2): UpdateWeight(
(op): FilterPruningBlock()
)
(3): UpdateWeight(
(op): FilterPruningBlock()
)
(4): UpdateWeight(
(op): FilterPruningBlock()
)
)
(post_ops): ModuleDict()
)
)
(roi_head): StandardRoIHead(
(bbox_roi_extractor): SingleRoIExtractor(
(roi_layers): ModuleList(
(0): RoIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, pool_mode=avg, aligned=True, use_torchvision=False)
(1): RoIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, pool_mode=avg, aligned=True, use_torchvision=False)
(2): RoIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, pool_mode=avg, aligned=True, use_torchvision=False)
(3): RoIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, pool_mode=avg, aligned=True, use_torchvision=False)
)
)
(bbox_head): Shared2FCBBoxHead(
(loss_cls): CrossEntropyLoss()
(loss_bbox): L1Loss()
(fc_cls): NNCFLinear(
in_features=1024, out_features=81, bias=True
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(fc_reg): NNCFLinear(
in_features=1024, out_features=320, bias=True
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(shared_convs): ModuleList()
(shared_fcs): ModuleList(
(0): NNCFLinear(
in_features=12544, out_features=1024, bias=True
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
(1): NNCFLinear(
in_features=1024, out_features=1024, bias=True
(pre_ops): ModuleDict()
(post_ops): ModuleDict()
)
)
(cls_convs): ModuleList()
(cls_fcs): ModuleList()
(reg_convs): ModuleList()
(reg_fcs): ModuleList()
(relu): ReLU(inplace=True)
)
)
)
I have tried replacing it as follows:
if graph.get_output_edges(node):
    out_edge = graph.get_output_edges(node)[0]
    out_shape = out_edge.tensor_shape[2:]
else:
    # For disconnected NNCFGraph when convolution layers have no output edge
    out_shape = self._calculate_output_shape(graph, node)
    nncf_logger.error("Node %s have no output edge in NNCFGraph", node.node_name)
And it raises this error:
AttributeError: 'FilterPruningController' object has no attribute '_calculate_output_shape'
The NNCF version is 2.0.0.
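Side note: the AttributeError above simply means the FilterPruningController in 2.0.0 does not define a _calculate_output_shape helper. For context, here is a minimal sketch (not the actual NNCF implementation; the function name is illustrative) of the kind of computation such a fallback needs to perform, deriving a conv node's spatial output shape from its input shape with standard convolution arithmetic:

def calculate_conv_output_shape(in_spatial_shape, kernel_size, stride, padding, dilation=(1, 1)):
    # Standard 2D convolution output-size formula, applied per spatial dimension.
    out_shape = []
    for size, k, s, p, d in zip(in_spatial_shape, kernel_size, stride, padding, dilation):
        out_shape.append((size + 2 * p - d * (k - 1) - 1) // s + 1)
    return tuple(out_shape)

# Example: the 7x7, stride-2, padding-3 conv1 above maps a 224x224 input to 112x112.
assert calculate_conv_output_shape((224, 224), (7, 7), (2, 2), (3, 3)) == (112, 112)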
@mkaglins
Hello, @dy1998. You can use the latest NNCF from the develop branch to solve your problem, as the 2.0.0 release does not contain the corresponding changes.
Thanks for your reply. I have tried the develop branch. However, it still raises the same kind of error, as shown below:
Traceback (most recent call last):
File "/data/dy/code/mmdetection-ote/tools/train.py", line 339, in <module>
main()
File "/data/dy/code/mmdetection-ote/tools/train.py", line 335, in main
meta=meta)
File "/data/dy/code/mmdetection-ote/mmdet/apis/train.py", line 103, in train_detector
compression_ctrl, model = wrap_nncf_model(model, cfg, data_loaders[0], get_fake_input)
File "/data/dy/code/mmdetection-ote/mmdet/integration/nncf/compression.py", line 223, in wrap_nncf_model
compression_state=compression_state)
File "/data/dy/anaconda3/envs/ote2/lib/python3.7/site-packages/nncf-2.0.0-py3.7.egg/nncf/torch/model_creation.py", line 147, in create_compressed_model
compression_ctrl = builder.build_controller(compressed_model)
File "/data/dy/anaconda3/envs/ote2/lib/python3.7/site-packages/nncf-2.0.0-py3.7.egg/nncf/torch/compression_method_api.py", line 163, in build_controller
ctrl = self._build_controller(model)
File "/data/dy/anaconda3/envs/ote2/lib/python3.7/site-packages/nncf-2.0.0-py3.7.egg/nncf/torch/pruning/filter_pruning/algo.py", line 103, in _build_controller
self.config)
File "/data/dy/anaconda3/envs/ote2/lib/python3.7/site-packages/nncf-2.0.0-py3.7.egg/nncf/torch/pruning/filter_pruning/algo.py", line 138, in __init__
self.flops_count_init()
File "/data/dy/anaconda3/envs/ote2/lib/python3.7/site-packages/nncf-2.0.0-py3.7.egg/nncf/torch/pruning/filter_pruning/algo.py", line 319, in flops_count_init
in_edge = graph.get_input_edges(node)[0]
IndexError: list index out of range
The corresponding code is shown below. What should I do?
for node in graph.get_nodes_by_types([v.op_func_name for v in NNCF_LINEAR_MODULES_DICT]):
    output_edges = graph.get_output_edges(node)
    if output_edges:
        out_edge = graph.get_output_edges(node)[0]
        out_shape = out_edge.tensor_shape
        self._modules_out_shapes[node.node_name] = out_shape[-1]
    else:
        # For disconnected NNCFGraph when node have no output edge
        nncf_logger.error("Node %s have no output edge in NNCFGraph", node.node_name)
        self._modules_out_shapes[node.node_name] = node.layer_attributes.out_features

    in_edge = graph.get_input_edges(node)[0]  # this line raises the error
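A hedged sketch of a workaround in the same spirit as the else branch above would guard the input edge too, inside the same loop; the _modules_in_shapes name and the layer_attributes.in_features fallback are assumptions that mirror the out-edge handling, not verified NNCF internals:

    # Sketch only: guard the failing input-edge lookup the same way as the output edge.
    input_edges = graph.get_input_edges(node)
    if input_edges:
        in_edge = input_edges[0]
        self._modules_in_shapes[node.node_name] = in_edge.tensor_shape[-1]
    else:
        # Fall back to the layer attributes when the node has no input edge
        nncf_logger.error("Node %s has no input edge in NNCFGraph", node.node_name)
        self._modules_in_shapes[node.node_name] = node.layer_attributes.in_features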
@dy1998 could you please provide instructions on how exactly to reproduce this on our side?
Here is my pip list:
addict 2.4.0
attrs 21.2.0
certifi 2021.5.30
charset-normalizer 2.0.6
cycler 0.10.0
Cython 0.29.24
editdistance 0.5.3
flatbuffers 2.0
idna 3.2
importlib-metadata 4.8.1
joblib 1.0.1
jsonschema 3.2.0
jstyleson 0.0.2
kiwisolver 1.3.2
lxml 4.6.3
matplotlib 3.4.3
mmcv-full 1.3.0
mmdet 2.9.0
mmpycocotools 12.0.3
natsort 7.1.1
networkx 2.6.3
ninja 1.10.2.1
nncf 2.0.0
numpy 1.21.2
onnx 1.10.1
onnxoptimizer 0.2.6
onnxruntime 1.9.0
opencv-python 4.5.3.56
packaging 21.0
pandas 1.3.3
Pillow 8.3.2
pip 21.2.2
Polygon3 3.0.8
protobuf 3.18.0
pydot 1.4.2
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
pytorchcv 0.0.55
pytz 2021.3
PyYAML 5.4.1
requests 2.26.0
scikit-learn 1.0
scipy 1.7.1
setuptools 58.0.4
six 1.16.0
terminaltables 3.1.0
texttable 1.6.4
threadpoolctl 3.0.0
torch 1.7.1
torchvision 0.8.2
tqdm 4.62.3
typing-extensions 3.10.0.2
urllib3 1.26.7
wheel 0.37.0
yapf 0.31.0
zipp 3.6.0
@dy1998 sorry for the huge delay. The code has changed in the meantime, so we would appreciate it if you tried again or gave us more information on how to reproduce this beyond the pip requirements list. In particular, we need the exact code for the training pipeline and the model you are using, or at least a minimal reproducer. Feel free to reopen this if the issue persists or you would like to post the information above.
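For anyone putting together the minimal reproducer requested above, a hedged skeleton of the usual NNCF filter-pruning setup might look like the following (the torchvision backbone and the config values are placeholder assumptions, not the reporter's actual mmdetection pipeline):

import torchvision

from nncf import NNCFConfig
from nncf.torch import create_compressed_model

# Placeholder model; the reporter's pipeline wraps mmdetection's Faster R-CNN instead.
model = torchvision.models.resnet50()

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {
        "algorithm": "filter_pruning",
        "pruning_init": 0.1,                 # assumed value
        "params": {"pruning_target": 0.4},   # assumed value
    },
})

# create_compressed_model builds the NNCFGraph and the pruning controller;
# the IndexError reported in this issue is raised inside this call.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)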
Did you solve the problem? When I export a yolov8-obb model to OpenVINO format, the same error occurs. I'm using the latest NNCF.
Hi @meaquanana! The problem from this issue was solved. Could you file a bug with a short reproducer so we can investigate your issue?