GroupNormPruner for sparse training
Hi @VainF, I'm playing around with sparse training using GroupNormPruner for YOLOv8. Your instructions say I have to call `pruner.update_regularizer()` to initialize the regularizer, but I only see the method `update_regularizor()` in this pruner.
- Is this method the one used for sparse training? (A minimal sketch of how I understand the sparse-training loop is at the end of this post.)
- When I pruned the Detect head of YOLOv8 with MagnitudePruner, I only ignored the last layers of the Detect head and it worked normally. However, when I try to sparse train with GroupNormPruner using the same ignored layers:
```python
for m in model.model.modules():
    if isinstance(m, (Detect,)):
        for modulelist in m.cv2:
            ignored_layers.append(modulelist[-1])
        for modulelist in m.cv3:
            ignored_layers.append(modulelist[-1])
```
it raises this error:
```
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [18,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
...
```
I see that in GroupNormPruner you have `self._groups = list(self.DG.get_all_groups(root_module_types=self.root_module_types, ignored_layers=self.ignored_layers))`, and I think the error comes from here.
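To check whether the groups collected by `DG.get_all_groups(...)` really include something inside the Detect head that should have been ignored, I'm using a small diagnostic along these lines (the `Detect` import path and the `model`, `example_inputs`, and `ignored_layers` variables follow my setup above, so they are assumptions, not part of the library):

```python
import torch_pruning as tp
from ultralytics.nn.modules import Detect  # import path may differ across ultralytics versions

# `model` is the underlying nn.Module (e.g. yolo.model) and `ignored_layers`
# is the list built in the snippet above.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Collect every submodule that belongs to a Detect head.
detect_children = set()
for m in model.modules():
    if isinstance(m, Detect):
        detect_children.update(m.modules())

# Print every pruning group that still touches a module inside the Detect head,
# to verify that ignored_layers excludes everything it is supposed to.
for group in DG.get_all_groups(ignored_layers=ignored_layers):
    modules = [dep.target.module for dep, _ in group]
    if any(mod in detect_children for mod in modules):
        print(group)
```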
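For reference, here is the minimal sparse-training sketch I mentioned above, based on my reading of the README. The importance class name, the constructor arguments, and the `update_regularizer()` vs. `update_regularizor()` spelling all depend on the installed Torch-Pruning version, and the toy model only stands in for YOLOv8, so please treat the exact names as assumptions rather than the definitive API:

```python
import torch
import torch.nn as nn
import torch_pruning as tp

# Toy stand-in model; for YOLOv8 you would pass the real model and its ignored_layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 10, 1),
)
example_inputs = torch.randn(1, 3, 64, 64)
ignored_layers = [model[3]]  # keep the output layer intact

imp = tp.importance.GroupNormImportance(p=2)  # importance class name may differ across versions
pruner = tp.pruner.GroupNormPruner(
    model, example_inputs,
    importance=imp,
    ignored_layers=ignored_layers,
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
pruner.update_regularizer()  # spelled update_regularizor() in some releases
for step in range(10):
    x = torch.randn(8, 3, 64, 64)
    loss = model(x).mean()   # dummy loss just to drive the loop
    optimizer.zero_grad()
    loss.backward()
    pruner.regularize(model)  # add the group-sparsity gradients before the optimizer step
    optimizer.step()
```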