
Could you share a config file for fine-tuning on COCO mixed with the GQA dataset?

Open tomgotjack opened this issue 1 year ago • 35 comments

I tried adding the following directly to yolo_world_v2_l_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py:

```python
mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset',
                        data_root='data/mixed_grounding/',
                        ann_file='annotations/final_mixed_train_no_coco.json',
                        data_prefix=dict(img='gqa/images/'),
                        filter_cfg=dict(filter_empty_gt=False, min_size=32),
                        pipeline=train_pipeline)
```

and replacing train_dataloader with:

```python
train_dataloader = dict(batch_size=train_batch_size_per_gpu,
                        collate_fn=dict(type='yolow_collate'),
                        dataset=dict(_delete_=True,
                                     type='ConcatDataset',
                                     datasets=[coco_train_dataset, mg_train_dataset],
                                     ignore_keys=['classes', 'palette']))
```

but set up like this it doesn't run. Could you share a config file I can use as a reference?

tomgotjack avatar May 07 '24 07:05 tomgotjack

Hi, same question here, waiting for an answer :D. I want to fine-tune while keeping the open-vocabulary capability and add my own categories.

mandyxiaomeng avatar May 09 '24 19:05 mandyxiaomeng

@mandyxiaomeng Hi, I've got the code running on my side. I started from configs/pretrain/yolo_world_v2_l_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_val.py and made a few small changes; the config is below:

```python
_base_ = ('../../third_party/mmyolo/configs/yolov8/'
          'yolov8_l_syncbn_fast_8xb16-500e_coco.py')
custom_imports = dict(imports=['yolo_world'],
                      allow_failed_imports=False)

# hyper-parameters
num_classes = 80
num_training_classes = 80
max_epochs = 30  # Maximum training epochs
close_mosaic_epochs = 30
save_epoch_intervals = 2
text_channels = 512
neck_embed_channels = [128, 256, _base_.last_stage_out_channels // 2]
neck_num_heads = [4, 8, _base_.last_stage_out_channels // 2 // 32]
base_lr = 1e-3
weight_decay = 0.0005
train_batch_size_per_gpu = 24
load_from = 'weights/yolo_world_v2_l_obj365v1_goldg_cc3mlite_pretrain-ca93cd1f.pth'

# text_model_name = '../pretrained_models/clip-vit-base-patch32-projection'
text_model_name = 'openai/clip-vit-base-patch32'
persistent_workers = False

# model settings
model = dict(type='YOLOWorldDetector',
             mm_neck=True,
             num_train_classes=num_training_classes,
             num_test_classes=num_classes,
             data_preprocessor=dict(type='YOLOWDetDataPreprocessor'),
             backbone=dict(_delete_=True,
                           type='MultiModalYOLOBackbone',
                           image_model={{_base_.model.backbone}},
                           text_model=dict(
                               type='HuggingCLIPLanguageBackbone',
                               model_name=text_model_name,
                               frozen_modules=['all'])),
             neck=dict(type='YOLOWorldPAFPN',
                       guide_channels=text_channels,
                       embed_channels=neck_embed_channels,
                       num_heads=neck_num_heads,
                       block_cfg=dict(type='MaxSigmoidCSPLayerWithTwoConv')),
             bbox_head=dict(type='YOLOWorldHead',
                            head_module=dict(type='YOLOWorldHeadModule',
                                             use_bn_head=True,
                                             embed_dims=text_channels,
                                             num_classes=num_training_classes)),
             train_cfg=dict(assigner=dict(num_classes=num_training_classes)))

# dataset settings
text_transform = [
    dict(type='RandomLoadText',
         num_neg_samples=(num_classes, num_classes),
         max_num_samples=num_training_classes,
         padding_to_max=True,
         padding_value=''),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                    'flip_direction', 'texts'))
]
train_pipeline = [
    *_base_.pre_transform,
    dict(type='MultiModalMosaic',
         img_scale=_base_.img_scale,
         pad_val=114.0,
         pre_transform=_base_.pre_transform),
    dict(type='YOLOv5RandomAffine',
         max_rotate_degree=0.0,
         max_shear_degree=0.0,
         scaling_ratio_range=(1 - _base_.affine_scale, 1 + _base_.affine_scale),
         max_aspect_ratio=_base_.max_aspect_ratio,
         border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2),
         border_val=(114, 114, 114)),
    *_base_.last_transform[:-1],
    *text_transform,
]
train_pipeline_stage2 = [*_base_.train_pipeline_stage2[:-1], *text_transform]

'''
obj365v1_train_dataset = dict(
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5Objects365V1Dataset',
        data_root='data/objects365v1/',
        ann_file='annotations/objects365_train.json',
        data_prefix=dict(img='train/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/obj365v1_class_texts.json',
    pipeline=train_pipeline)
'''

coco_train_dataset = dict(
    # _delete_=True,
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5CocoDataset',
        data_root='data/coco',
        ann_file='annotations/instances_train2017.json',
        data_prefix=dict(img='train2017/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=train_pipeline)

mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset',
                        data_root='data/mixed_grounding/',
                        ann_file='annotations/final_mixed_train_no_coco.json',
                        data_prefix=dict(img='gqa/images/'),
                        filter_cfg=dict(filter_empty_gt=False, min_size=32),
                        pipeline=train_pipeline)

'''
flickr_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/flickr/',
    ann_file='annotations/final_flickr_separateGT_train.json',
    data_prefix=dict(img='full_images/'),
    filter_cfg=dict(filter_empty_gt=True, min_size=32),
    pipeline=train_pipeline)
'''

train_dataloader = dict(batch_size=train_batch_size_per_gpu,
                        collate_fn=dict(type='yolow_collate'),
                        dataset=dict(_delete_=True,
                                     type='ConcatDataset',
                                     datasets=[
                                         # obj365v1_train_dataset,
                                         # flickr_train_dataset,
                                         coco_train_dataset,
                                         mg_train_dataset
                                     ],
                                     ignore_keys=['classes', 'palette']))

test_pipeline = [
    *_base_.test_pipeline[:-1],
    dict(type='LoadText'),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                    'scale_factor', 'pad_param', 'texts'))
]
coco_val_dataset = dict(
    _delete_=True,
    type='MultiModalDataset',
    dataset=dict(type='YOLOv5CocoDataset',
                 data_root='data/coco',
                 test_mode=True,
                 ann_file='annotations/instances_val2017.json',
                 data_prefix=dict(img='val2017/'),
                 # data_prefix=dict(img=''),
                 batch_shapes_cfg=None),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=test_pipeline)
val_dataloader = dict(dataset=coco_val_dataset)
test_dataloader = val_dataloader

val_evaluator = dict(_delete_=True,
                     type='mmdet.CocoMetric',
                     proposal_nums=(100, 1, 10),
                     ann_file='data/coco/annotations/instances_val2017.json',
                     metric='bbox')

test_evaluator = val_evaluator

# training settings
default_hooks = dict(param_scheduler=dict(scheduler_type='linear',
                                          lr_factor=0.01,
                                          max_epochs=max_epochs),
                     checkpoint=dict(max_keep_ckpts=-1,
                                     save_best=None,
                                     interval=save_epoch_intervals))
custom_hooks = [
    dict(type='EMAHook',
         ema_type='ExpMomentumEMA',
         momentum=0.0001,
         update_buffers=True,
         strict_load=False,
         priority=49),
    dict(type='mmdet.PipelineSwitchHook',
         switch_epoch=max_epochs - close_mosaic_epochs,
         switch_pipeline=train_pipeline_stage2)
]
train_cfg = dict(max_epochs=max_epochs,
                 val_interval=5,
                 dynamic_intervals=[((max_epochs - close_mosaic_epochs),
                                     _base_.val_interval_stage2)])
optim_wrapper = dict(optimizer=dict(
    _delete_=True,
    type='SGD',
    lr=base_lr,
    momentum=0.937,
    nesterov=True,
    weight_decay=weight_decay,
    batch_size_per_gpu=train_batch_size_per_gpu),
                     paramwise_cfg=dict(custom_keys={
                         'backbone.text_model': dict(lr_mult=0.01),
                         'logit_scale': dict(weight_decay=0.0)
                     }),
                     constructor='YOLOWv5OptimizerConstructor')

# evaluation settings
val_evaluator = dict(_delete_=True,
                     type='mmdet.CocoMetric',
                     proposal_nums=(100, 1, 10),
                     ann_file='data/coco/annotations/instances_val2017.json',
                     metric='bbox')
```

With this config I managed to train on COCO and GQA mixed together. However, after only 5 epochs the accuracy on COCO val stopped improving: over those 5 epochs the AP rose from 45 (before fine-tuning) to 50 and then stayed flat. The hyper-parameters are probably not well chosen; I picked the learning rate and batch size off the top of my head, so adjust them if you use this. I ran some tests with the 12-epoch checkpoint: the open-vocabulary capability is preserved reasonably well, and non-COCO classes can basically still be recognized.
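One knob worth double-checking before reusing the config above: with max_epochs = 30 and close_mosaic_epochs = 30, the PipelineSwitchHook's switch_epoch works out to 0, so (if I read the hook right) the stage-2, mosaic-free pipeline is active from the very first epoch. Below is a minimal sketch of the hyper-parameter block with illustrative alternatives; the values are assumptions, not tuned numbers, and the 2e-4 / 8-GPU reference is just the official finetune_coco config name quoted at the top of this issue.

```python
# Illustrative alternatives for the hyper-parameter block at the top of the config
# (assumptions only, not validated).
max_epochs = 30
close_mosaic_epochs = 10          # switch_epoch = max_epochs - close_mosaic_epochs = 20,
                                  # so mosaic augmentation stays on for the first 20 epochs
train_batch_size_per_gpu = 16
base_lr = 2e-4                    # the stock finetune_coco config name indicates 2e-4 at 8 GPUs;
                                  # scale roughly linearly with your total batch size
```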

tomgotjack avatar May 10 '24 11:05 tomgotjack

Thanks a lot! I'll try mixing GQA with my own dataset.

mandyxiaomeng avatar May 13 '24 10:05 mandyxiaomeng


Hi, may I ask what format your own dataset is in? I'd also like to use this on my own dataset, which is basically assorted tools such as screwdrivers and scissors. What format did you use for your dataset?

dq1125 avatar May 14 '24 03:05 dq1125

@dq1125 Just convert your annotations into a JSON file that follows the COCO format.
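For anyone unsure what "COCO format" means concretely here, a minimal sketch of the two JSON files used by the config above is shown below. The file names, category names and values are illustrative assumptions; only the images/annotations/categories layout is the standard COCO detection format, and the class-text file layout (one list of phrases per category) follows what I understand data/texts/coco_class_texts.json to look like.

```python
import json

# Minimal COCO-format detection annotations (illustrative content only).
coco_ann = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 1920, "height": 1080}
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; iscrowd=0 for ordinary boxes
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 200, 50, 80], "area": 4000, "iscrowd": 0}
    ],
    "categories": [
        {"id": 1, "name": "screwdriver"},
        {"id": 2, "name": "scissors"}
    ]
}
with open("instances_train.json", "w") as f:
    json.dump(coco_ann, f)

# Class-text file referenced by class_text_path in the config above
# (assumed layout: one list of phrases per category, in category order).
class_texts = [["screwdriver"], ["scissors"]]
with open("my_class_texts.json", "w") as f:
    json.dump(class_texts, f)
```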

tomgotjack avatar May 14 '24 03:05 tomgotjack


Got it, thanks a lot!

dq1125 avatar May 14 '24 03:05 dq1125

@tomgotjack Hi, could you upload a training log? I'd like to compare it against my own run. [training-log screenshot] At the very start of fine-tuning my loss shows no obvious decrease; is that normal?

ccl-private avatar Jun 25 '24 01:06 ccl-private

After training a while longer, only grad_norm shows a clear decrease. [training-log screenshot]

ccl-private avatar Jun 25 '24 02:06 ccl-private


It's normal for the loss not to drop noticeably. Train for a few more epochs and watch how the val accuracy changes. The logs on the server I used weren't kept, so I can't share mine.

tomgotjack avatar Jun 25 '24 03:06 tomgotjack

OK.

ccl-private avatar Jun 25 '24 03:06 ccl-private

Epoch(val) [1][2500/2500] coco/bbox_mAP: 0.4730 coco/bbox_mAP_50: 0.6330 coco/bbox_mAP_75: 0.5190 coco/bbox_mAP_s: 0.3170 coco/bbox_mAP_m: 0.5210 coco/bbox_mAP_l: 0.5980 data_time: 0.0009 time: 0.0540

ccl-private avatar Jun 25 '24 06:06 ccl-private

@tomgotjack Hi, why is it that when I train with your config, grad_norm becomes very large after two epochs and then drops to 0? Do you know what might cause this? I'm also fine-tuning with COCO+GQA, using YOLOWorldDetector, 4 GPUs, batchsize_per_gpu=8, base_lr=1e-4.

Ricardoluffy avatar Jul 19 '24 02:07 Ricardoluffy

Sorry, I never ran into this problem. My current environment has no GPU, so it's not convenient for me to test; you'll have to keep looking for other causes.
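One generic thing to try in this situation: grad_norm spiking and then collapsing to zero often points to exploding or NaN gradients, and mmengine's optimizer wrapper supports gradient clipping. A hedged sketch against the optim_wrapper from the config above (max_norm is an illustrative value; lowering base_lr is another common mitigation):

```python
# Sketch only: add gradient clipping to the existing optim_wrapper.
optim_wrapper = dict(
    clip_grad=dict(max_norm=10.0, norm_type=2),   # handled by mmengine's OptimWrapper
    optimizer=dict(_delete_=True,
                   type='SGD',
                   lr=base_lr,
                   momentum=0.937,
                   nesterov=True,
                   weight_decay=weight_decay,
                   batch_size_per_gpu=train_batch_size_per_gpu),
    paramwise_cfg=dict(custom_keys={
        'backbone.text_model': dict(lr_mult=0.01),
        'logit_scale': dict(weight_decay=0.0)
    }),
    constructor='YOLOWv5OptimizerConstructor')
```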


tomgotjack avatar Jul 19 '24 02:07 tomgotjack

@tomgotjack One more question: I'm fine-tuning on my own dataset (28 classes). Using only my own dataset it trains fine, but after mixing in GQA it errors out with:

```
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmengine/dataset/dataset_wrapper.py", line 171, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 410, in __getitem__
    data = self.prepare_data(idx)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmyolo/datasets/yolov5_coco.py", line 53, in prepare_data
    return self.pipeline(data_info)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 60, in __call__
    data = t(data)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmdet/datasets/transforms/formatting.py", line 100, in transform
    self.mapping_table[key]] = results[key][valid_idx]
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/mmdet/structures/bbox/base_boxes.py", line 145, in __getitem__
    boxes = boxes[index]
IndexError: index 29 is out of bounds for dimension 0 with size 29
```

I don't understand why this happens.

Ricardoluffy avatar Aug 01 '24 07:08 Ricardoluffy


Hi, for the four datasets in the config file above, do I need to download the JSON annotations and images for all of them, or is downloading just GQA enough?

wenqiuL avatar Aug 05 '24 07:08 wenqiuL

COCO and GQA are enough; the others aren't loaded. Just check which datasets the config actually loads.


tomgotjack avatar Aug 05 '24 07:08 tomgotjack

@tomgotjack OK, from the code below I found that only the GQA data needs to be loaded:

```python
train_dataloader = dict(batch_size=train_batch_size_per_gpu,
                        collate_fn=dict(type='yolow_collate'),
                        dataset=dict(_delete_=True,
                                     type='ConcatDataset',
                                     datasets=[
                                         # obj365v1_train_dataset,
                                         # flickr_train_dataset,
                                         coco_train_dataset,
                                         mg_train_dataset
                                     ],
                                     ignore_keys=['classes', 'palette']))
```

But the problem is that the download links in data.md seem to be dead and I can't find final_mixed_train_no_coco.json. Could you share a link for this part? Thanks a lot.

```python
mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset',
                        data_root='data/mixed_grounding/',
                        ann_file='annotations/final_mixed_train_no_coco.json',
                        data_prefix=dict(img='gqa/images/'),
                        filter_cfg=dict(filter_empty_gt=False, min_size=32),
                        pipeline=train_pipeline)
```

wenqiuL avatar Aug 05 '24 07:08 wenqiuL

Sorry, I haven't touched this project for several months and don't have the relevant files saved any more. Please search around yourself; anything I was able to find should be easy to find.


tomgotjack avatar Aug 05 '24 07:08 tomgotjack

Regarding the IndexError reported above ("index 29 is out of bounds for dimension 0 with size 29") when mixing your own dataset with GQA: did you manage to solve it?

pengc-bjtu avatar Oct 25 '24 05:10 pengc-bjtu

Hello, I've also hit this same IndexError when mixing in GQA. Did you manage to solve it?

zyh1122 avatar Nov 26 '24 02:11 zyh1122

@zyh1122 I ran into this problem too. As far as I vaguely remember it was a bug related to the categories, but it's been too long for me to recall the exact fix. In the end I abandoned this approach because the performance dropped too much after mixing.

wenqiuL avatar Nov 26 '24 02:11 wenqiuL

I've modified the object-detection network myself, so without loading the pretrained weights, does that mean I can't obtain this open-vocabulary detection capability at all? My hope is to first get the open-set functionality working on top of my improved detector, but unfortunately this bug has me stuck. I can't work out how num_classes = 1203 and num_training_classes = 10 should be set, even though this dataset works fine with the finetune_coco config. And on a single GPU, training takes far too long.

zyh1122 avatar Nov 26 '24 02:11 zyh1122

Sorry, after half a year I no longer remember the exact steps. I only remember that mixing GQA with your own dataset for fine-tuning works: it preserves the open-vocabulary capability while improving recognition on your own data. If you fine-tune only on your own dataset, the open-set capability is lost, but in exchange the results on your own data are the best. Taking COCO as an example: fine-tuning on COCO alone gives 53.3 AP but loses the open-set capability; mixing COCO and GQA gives 50.0 AP on COCO after fine-tuning, while the open-set performance on LVIS minival drops from 30 AP to 29 AP.


tomgotjack avatar Nov 26 '24 02:11 tomgotjack

I've matched up the categories and tried various changes, but nothing works. From debugging I found that it first maps the categories through the GQA dataset, yet GQA itself doesn't really have categories, does it? Is the problem in the dataset-loading part? No matter how I change num_classes, it still doesn't work.

zyh1122 avatar Dec 16 '24 02:12 zyh1122

I modified the object-detection model part myself, so I can't load the pretrained weights and have to train from scratch. With the configs under finetune_coco it runs fine, so that path works. But now, when I add GQA for open-vocabulary detection, I run into: boxes = boxes[index] IndexError: index 10 is out of bounds for dimension 0 with size 10

zyh1122 avatar Dec 16 '24 02:12 zyh1122

Hi, when training the VisDrone dataset mixed with GQA I get "IndexError: Caught IndexError in DataLoader worker process 0." and "IndexError: index 12 is out of bounds for dimension 0 with size 12". I couldn't find a solution on GitHub; could you give me some advice? @wondervictor

yunfei-ai avatar Apr 08 '25 10:04 yunfei-ai

Hello, did you manage to solve this? I've run into the same problem.

Milk-Ustinian avatar Apr 17 '25 17:04 Milk-Ustinian

You just need to increase num_class.


yunfei-ai avatar Apr 17 '25 17:04 yunfei-ai


Thank you very much for the suggestion. Do I only need to change num_class, or does num_training_classes need to change as well? I'm also using the VisDrone dataset; what values did you set?

Milk-Ustinian avatar Apr 17 '25 17:04 Milk-Ustinian

num_class = 1203, because I don't know exactly how many categories the GQA dataset contains.
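To make the reported fix concrete, here is a hedged sketch of the entries in the earlier config that the class counts feed into. My reading of this thread (not an official explanation) is that grounding data such as GQA can yield more text prompts per image than a small custom dataset has categories, so the label indices can run past the text list unless these values are large enough; the 1203 below is simply the value quoted above, not a measured GQA statistic, and num_training_classes may need raising too since the head and assigner also use it.

```python
# Where the class counts plug into the earlier config (illustrative values).
num_classes = 1203            # value reported in this thread as the fix
num_training_classes = 80     # mirrors the pretrain configs; may also need raising

text_transform = [
    dict(type='RandomLoadText',
         num_neg_samples=(num_classes, num_classes),   # negative prompts sampled per image
         max_num_samples=num_training_classes,         # length of the padded per-image text list
         padding_to_max=True,
         padding_value=''),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                    'flip_direction', 'texts'))
]
```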


yunfei-ai avatar Apr 17 '25 17:04 yunfei-ai