SlimSAM
list index out of range
I'm using the dataset from roboflow. Can you help me?
!CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path "/kaggle/working/Crop-Fields-LOD-13-14-15-4/train/_annotations.coco.json" --valdata_path "/kaggle/working/Crop-Fields-LOD-13-14-15-4/valid/_annotations.coco.json" --trainsize 480 --valsize 126 --prune_ratio 0.3 --epochs 20
===========================Parameter Settings===========================
Pruning Ratio: 0.3
VIT num_heads: 12
norm_type: mean
imptype: Disturb
global: False
learning rate: 0.0001
a_weight: 0.5
round_to 12
TRAIN_SIZE 480 VAL_SIZE 126 GRAD_SIZE 1000 Epochs 20
Traceback (most recent call last):
  File "/kaggle/working/SlimSAM/prune_distill_step1.py", line 295, in
Dataset: https://universe.roboflow.com/cropfields/crop-fields-lod-13-14-15/dataset/4
Hello @BuKeMod, please check whether you have altered the '--gradsize' parameter. The default value of 'gradsize' is 1000, and 'gradsize' must be less than 'trainsize'. Thanks.
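The constraint above can be sketched as a small sanity check. This is only an illustration of the rule (0 < gradsize <= trainsize); 'check_args' is a hypothetical helper, not a function in prune_distill_step1.py:

```python
# Hypothetical sanity check mirroring the constraint described above:
# gradsize must be positive and must not exceed trainsize.
def check_args(trainsize: int, gradsize: int = 1000) -> None:
    if not (0 < gradsize <= trainsize):
        raise ValueError(
            f"gradsize ({gradsize}) must lie in (0, trainsize={trainsize}]"
        )

check_args(trainsize=1200)   # default gradsize=1000 fits
# check_args(trainsize=480)  # would raise: default gradsize 1000 > 480
```

With the command shown earlier (trainsize 480 and the default gradsize of 1000), such a check would fail, which matches the advice to pass an explicit '--gradsize' below 480.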
I've encountered a new issue. Do you have any training examples, such as a Colab or Kaggle notebook? I'm a beginner and don't quite understand yet. Thank you.
CUDA visible devices: 1
CUDA Device Name: Tesla P100-PCIE-16GB
===========================Parameter Settings===========================
Pruning Ratio: 0.3
VIT num_heads: 12
norm_type: mean
imptype: Disturb
global: False
learning rate: 0.0001
a_weight: 0.5
round_to 12
TRAIN_SIZE 40 VAL_SIZE 12 GRAD_SIZE 0 Epochs 5
===========================Pruning Start===========================
/kaggle/working/SlimSAM/torch_pruning/dependency.py:639: UserWarning: Unwrapped parameters detected: ['pos_embed', 'neck.3.bias', 'neck.3.weight', 'neck.1.bias', 'neck.1.weight'].
Torch-Pruning will prune the last non-singleton dimension of a parameter. If you wish to customize this behavior, please provide an unwrapped_parameters argument.
warnings.warn(warning_str)
Traceback (most recent call last):
  File "/kaggle/working/SlimSAM/prune_distill_step1.py", line 295, in
Please ensure that the 'gradsize' parameter is set to a value greater than 0. The importance estimation algorithm relies on gradient information for each parameter, making it essential that 'gradsize' is not zero. Note that while 'gradsize' should be less than 'trainsize', a larger 'gradsize' can be more effective for pruning purposes. Thanks!
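As a toy illustration of why a zero 'gradsize' can end in "list index out of range" (this is simplified logic, not SlimSAM's actual implementation): importance scores are accumulated over 'gradsize' batches, so with 0 batches the score list stays empty and any indexing into it fails.

```python
# Toy sketch (not SlimSAM's code): importance estimation accumulates
# gradient statistics over `gradsize` batches. With gradsize=0 the loop
# never runs, the score list stays empty, and indexing it raises IndexError.
def estimate_importance(grad_batches, gradsize):
    scores = []
    for batch in grad_batches[:gradsize]:
        # accumulate a per-batch gradient-magnitude score
        scores.append(sum(abs(g) for g in batch))
    return scores

grads = [[1, -2], [3, 1]]
print(estimate_importance(grads, gradsize=2))  # two scores, indexing is safe
print(estimate_importance(grads, gradsize=0))  # [] -> scores[0] would raise
```

This is why the run above with GRAD_SIZE 0 crashes while the same command with a positive 'gradsize' (still below 'trainsize') proceeds past importance estimation.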